<a id="x-28MGL-3A-40MGL-MANUAL-20MGL-PAX-3ASECTION-29"></a>
MGL Manual
Table of Contents
- 1 MGL ASDF System
- 2 Introduction
- 3 Datasets
- 4 Resampling
- 5 Core
- 6 Monitoring
- 7 Classification
- 8 Features
- 9 Gradient Based Optimization
- 10 Differentiable Functions
- 11 Backpropagation Neural Networks
- 12 Boltzmann Machines
- 13 Gaussian Processes
- 14 Natural Language Processing
[in package MGL]
<a id="x-28-22mgl-22-20ASDF-2FSYSTEM-3ASYSTEM-29"></a>
1 MGL ASDF System

- Version: 0.1.0
- Description: MGL is a machine learning library for backpropagation neural networks, Boltzmann machines, Gaussian processes and more.
- Licence: MIT, see COPYING.
- Author: Gábor Melis <mega@retes.hu>
- Mailto: mega@retes.hu
- Homepage: http://melisgl.github.io/mgl
- Bug tracker: https://github.com/melisgl/mgl/issues
- Source control: GIT
<a id="x-28MGL-3A-40MGL-INTRODUCTION-20MGL-PAX-3ASECTION-29"></a>
2 Introduction
<a id="x-28MGL-3A-40MGL-OVERVIEW-20MGL-PAX-3ASECTION-29"></a>
2.1 Overview
MGL is a Common Lisp machine learning library by Gábor Melis with some parts originally contributed by Ravenpack International. It mainly concentrates on various forms of neural networks (Boltzmann machines, feed-forward and recurrent backprop nets). Most of MGL is built on top of MGL-MAT so it has BLAS and CUDA support.
In general, the focus is on power and performance not on ease of use. Perhaps one day there will be a cookie cutter interface with restricted functionality if a reasonable compromise is found between power and utility.
<a id="x-28MGL-3A-40MGL-LINKS-20MGL-PAX-3ASECTION-29"></a>
2.2 Links
Here is the official repository and the HTML documentation for the latest version.
<a id="x-28MGL-3A-40MGL-DEPENDENCIES-20MGL-PAX-3ASECTION-29"></a>
2.3 Dependencies
MGL used to rely on LLA to interface to BLAS and LAPACK. That's mostly history by now, but configuration of foreign libraries is still done via LLA. See the README in LLA on how to set things up. Note that these days OpenBLAS is easier to set up and just as fast as ATLAS.
CL-CUDA and MGL-MAT are the two main dependencies and also the ones not yet in quicklisp, so just drop them into quicklisp/local-projects/. If there is no suitable GPU on the system or the CUDA SDK is not installed, MGL will simply fall back on using BLAS and Lisp code. Wrapping code in MGL-MAT:WITH-CUDA* is basically all that's needed to run on the GPU, and with MGL-MAT:CUDA-AVAILABLE-P one can check whether the GPU is really being used.
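A minimal sketch of that pattern (only WITH-CUDA* and CUDA-AVAILABLE-P, both mentioned above, are assumed):

```common-lisp
(mgl-mat:with-cuda* ()
  ;; If a GPU and the CUDA SDK are present, matrix operations in the body
  ;; run on the GPU; otherwise the same code runs on the BLAS/Lisp fallback.
  (format t "CUDA in use: ~A~%" (mgl-mat:cuda-available-p))
  ;; ... matrix heavy code ...
  )
```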
<a id="x-28MGL-3A-40MGL-CODE-ORGANIZATION-20MGL-PAX-3ASECTION-29"></a>
2.4 Code Organization
MGL consists of several packages dedicated to different tasks. For example, package MGL-RESAMPLE is about Resampling and MGL-GD is about Gradient Descent and so on. On one hand, having many packages makes it easier to cleanly separate API and implementation and also to explore a specific task. At other times, they can be a hassle, so the MGL package itself reexports every external symbol found in all the other packages that make up MGL and MGL-MAT (see MGL-MAT::@MAT-MANUAL) on which it heavily relies.

One exception to this rule is the bundled, but independent MGL-GNUPLOT library.

The built in tests can be run with:

    (ASDF:OOS 'ASDF:TEST-OP '#:MGL)

Note that most of the tests are rather stochastic and can fail once in a while.
<a id="x-28MGL-3A-40MGL-GLOSSARY-20MGL-PAX-3ASECTION-29"></a>
2.5 Glossary
Ultimately machine learning is about creating models of some domain. The observations in the modelled domain are called instances (also known as examples or samples). Sets of instances are called datasets. Datasets are used when fitting a model or when making predictions. Sometimes the word predictions is too specific, and the results obtained from applying a model to some instances are simply called results.
<a id="x-28MGL-DATASET-3A-40MGL-DATASET-20MGL-PAX-3ASECTION-29"></a>
3 Datasets
[in package MGL-DATASET]
An instance can often be any kind of object of the user's choice. It is typically represented by a set of numbers, which is called a feature vector, or by a structure holding the feature vector, the label, etc. A dataset is a SEQUENCE of such instances or a sampler (see Samplers) that produces instances.
<a id="x-28MGL-DATASET-3AMAP-DATASET-20FUNCTION-29"></a>
- [function] MAP-DATASET FN DATASET

    Call FN with each instance in DATASET. This is basically equivalent to iterating over the elements of a sequence or a sampler (see Samplers).
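For example, with a plain list as the dataset:

```common-lisp
(map-dataset #'print '(0 1 2))
;; prints 0, 1 and 2 in turn
```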
<a id="x-28MGL-DATASET-3AMAP-DATASETS-20FUNCTION-29"></a>
- [function] MAP-DATASETS FN DATASETS &KEY (IMPUTE NIL IMPUTEP)

    Call FN with a list of instances, one from each dataset in DATASETS. Return nothing. If IMPUTE is specified then iterate until the largest dataset is consumed, imputing IMPUTE for missing values. If IMPUTE is not specified then iterate until the smallest dataset runs out.

        (map-datasets #'prin1 '((0 1 2) (:a :b)))
        .. (0 :A)(1 :B)

        (map-datasets #'prin1 '((0 1 2) (:a :b)) :impute nil)
        .. (0 :A)(1 :B)(2 NIL)

    It is of course allowed to mix sequences with samplers:

        (map-datasets #'prin1
                      (list '(0 1 2)
                            (make-sequence-sampler '(:a :b) :max-n-samples 2)))
        .. (0 :A)(1 :B)
<a id="x-28MGL-DATASET-3A-40MGL-SAMPLER-20MGL-PAX-3ASECTION-29"></a>
3.1 Samplers
Some algorithms do not need random access to the entire dataset and can work with a stream of observations. Samplers are simple generators providing two functions: SAMPLE and FINISHEDP.
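As a quick illustration of the protocol, here is a hypothetical helper (not part of MGL) that collects everything a finite sampler produces:

```common-lisp
;; Hypothetical helper: exhaust a finite sampler. For infinite samplers
;; this would never return, so LIST-SAMPLES below is the safer choice.
(defun drain-sampler (sampler)
  (loop until (finishedp sampler)
        collect (sample sampler)))
```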
<a id="x-28MGL-DATASET-3ASAMPLE-20GENERIC-FUNCTION-29"></a>
- [generic-function] SAMPLE SAMPLER

    If SAMPLER has not run out of data (see FINISHEDP), SAMPLE returns an object that represents a sample from the world to be experienced or, in other words, simply something that can be used as input for training or prediction. It is not allowed to call SAMPLE if SAMPLER is FINISHEDP.
<a id="x-28MGL-DATASET-3AFINISHEDP-20GENERIC-FUNCTION-29"></a>
- [generic-function] FINISHEDP SAMPLER

    See if SAMPLER has run out of examples.
<a id="x-28MGL-DATASET-3ALIST-SAMPLES-20FUNCTION-29"></a>
- [function] LIST-SAMPLES SAMPLER MAX-SIZE

    Return a list of samples of length at most MAX-SIZE or less if SAMPLER runs out.
<a id="x-28MGL-DATASET-3AMAKE-SEQUENCE-SAMPLER-20FUNCTION-29"></a>
- [function] MAKE-SEQUENCE-SAMPLER SEQ &KEY MAX-N-SAMPLES

    Create a sampler that returns elements of SEQ in their original order. If MAX-N-SAMPLES is non-nil, then at most MAX-N-SAMPLES are sampled.
<a id="x-28MGL-DATASET-3AMAKE-RANDOM-SAMPLER-20FUNCTION-29"></a>
- [function] MAKE-RANDOM-SAMPLER SEQ &KEY MAX-N-SAMPLES (REORDER #'MGL-RESAMPLE:SHUFFLE)

    Create a sampler that returns elements of SEQ in random order. If MAX-N-SAMPLES is non-nil, then at most MAX-N-SAMPLES are sampled. The first pass is over a shuffled copy of SEQ, and this copy is reshuffled whenever the sampler reaches the end of it. Shuffling is performed by calling the REORDER function.
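For example (output is random; the shown value is only illustrative):

```common-lisp
(list-samples (make-random-sampler '(0 1 2 3 4) :max-n-samples 3) 10)
;; => a random 3-element selection such as (3 0 4)
```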
<a id="x-28MGL-DATASET-3A-2AINFINITELY-EMPTY-DATASET-2A-20VARIABLE-29"></a>
- [variable] *INFINITELY-EMPTY-DATASET* #&lt;FUNCTION-SAMPLER "infinitely empty" &gt;

    This is the default dataset for MGL-OPT:MINIMIZE. It's an infinite stream of NILs.
<a id="x-28MGL-DATASET-3A-40MGL-SAMPLER-FUNCTION-SAMPLER-20MGL-PAX-3ASECTION-29"></a>
3.1.1 Function Sampler
<a id="x-28MGL-DATASET-3AFUNCTION-SAMPLER-20CLASS-29"></a>
- [class] FUNCTION-SAMPLER

    A sampler with a function in its GENERATOR that produces a stream of samples which may or may not be finite depending on MAX-N-SAMPLES. FINISHEDP returns T iff MAX-N-SAMPLES is non-nil, and it's not greater than the number of samples generated (N-SAMPLES).

        (list-samples (make-instance 'function-sampler
                                     :generator (lambda () (random 10))
                                     :max-n-samples 5)
                      10)
        => (3 5 2 3 3)
<a id="x-28MGL-DATASET-3AGENERATOR-20-28MGL-PAX-3AREADER-20MGL-DATASET-3AFUNCTION-SAMPLER-29-29"></a>
- [reader] GENERATOR FUNCTION-SAMPLER (:GENERATOR)

    A generator function of no arguments that returns the next sample.
<a id="x-28MGL-DATASET-3AMAX-N-SAMPLES-20-28MGL-PAX-3AACCESSOR-20MGL-DATASET-3AFUNCTION-SAMPLER-29-29"></a>
- [accessor] MAX-N-SAMPLES FUNCTION-SAMPLER (:MAX-N-SAMPLES = NIL)
<a id="x-28MGL-COMMON-3ANAME-20-28MGL-PAX-3AREADER-20MGL-DATASET-3AFUNCTION-SAMPLER-29-29"></a>
- [reader] NAME FUNCTION-SAMPLER (:NAME = NIL)

    An arbitrary object naming the sampler. Only used for printing the sampler object.
<a id="x-28MGL-DATASET-3AN-SAMPLES-20-28MGL-PAX-3AREADER-20MGL-DATASET-3AFUNCTION-SAMPLER-29-29"></a>
- [reader] N-SAMPLES FUNCTION-SAMPLER (:N-SAMPLES = 0)
<a id="x-28MGL-RESAMPLE-3A-40MGL-RESAMPLE-20MGL-PAX-3ASECTION-29"></a>
4 Resampling
[in package MGL-RESAMPLE]
The focus of this package is on resampling methods such as cross-validation and bagging which can be used for model evaluation, model selection, and also as a simple form of ensembling. Data partitioning and sampling functions are also provided because they tend to be used together with resampling.
<a id="x-28MGL-RESAMPLE-3A-40MGL-RESAMPLE-PARTITIONS-20MGL-PAX-3ASECTION-29"></a>
4.1 Partitions
The following functions partition a dataset (currently only SEQUENCEs are supported) into a number of partitions. For each element in the original dataset there is exactly one partition that contains it.
<a id="x-28MGL-RESAMPLE-3AFRACTURE-20FUNCTION-29"></a>
- [function] FRACTURE FRACTIONS SEQ &KEY WEIGHT

    Partition SEQ into a number of subsequences. FRACTIONS is either a positive integer or a list of non-negative real numbers. WEIGHT is NIL or a function that returns a non-negative real number when called with an element from SEQ. If FRACTIONS is a positive integer then return a list of that many subsequences with equal sum of weights bar rounding errors, else partition SEQ into subsequences, where the sum of weights of subsequence I is proportional to element I of FRACTIONS. If WEIGHT is NIL, then every element is assumed to have the same weight.

    To split into 5 sequences:

        (fracture 5 '(0 1 2 3 4 5 6 7 8 9))
        => ((0 1) (2 3) (4 5) (6 7) (8 9))

    To split into two sequences whose lengths are proportional to 2 and 3:

        (fracture '(2 3) '(0 1 2 3 4 5 6 7 8 9))
        => ((0 1 2 3) (4 5 6 7 8 9))
<a id="x-28MGL-RESAMPLE-3ASTRATIFY-20FUNCTION-29"></a>
- [function] STRATIFY SEQ &KEY (KEY #'IDENTITY) (TEST #'EQL)

    Return the list of strata of SEQ. SEQ is a sequence of elements for which the function KEY returns the class they belong to. Such classes are opaque objects compared for equality with TEST. A stratum is a sequence of elements with the same (under TEST) KEY.

        (stratify '(0 1 2 3 4 5 6 7 8 9) :key #'evenp)
        => ((0 2 4 6 8) (1 3 5 7 9))
<a id="x-28MGL-RESAMPLE-3AFRACTURE-STRATIFIED-20FUNCTION-29"></a>
- [function] FRACTURE-STRATIFIED FRACTIONS SEQ &KEY (KEY #'IDENTITY) (TEST #'EQL) WEIGHT

    Similar to FRACTURE, but also makes sure that keys are evenly distributed among the partitions (see STRATIFY). It can be useful for classification tasks to partition the data set while keeping the distribution of classes the same.

    Note that the sets returned are not in random order. In fact, they are sorted internally by KEY.

    For example, to make two splits with approximately the same number of even and odd numbers:

        (fracture-stratified 2 '(0 1 2 3 4 5 6 7 8 9) :key #'evenp)
        => ((0 2 1 3) (4 6 8 5 7 9))
<a id="x-28MGL-RESAMPLE-3A-40MGL-RESAMPLE-CROSS-VALIDATION-20MGL-PAX-3ASECTION-29"></a>
4.2 Cross-validation
<a id="x-28MGL-RESAMPLE-3ACROSS-VALIDATE-20FUNCTION-29"></a>
- [function] CROSS-VALIDATE DATA FN &KEY (N-FOLDS 5) (FOLDS (ALEXANDRIA:IOTA N-FOLDS)) (SPLIT-FN #'SPLIT-FOLD/MOD) PASS-FOLD

    Map FN over the FOLDS of DATA split with SPLIT-FN and collect the results in a list. The simplest demonstration is:

        (cross-validate '(0 1 2 3 4)
                        (lambda (test training)
                          (list test training))
                        :n-folds 5)
        => (((0) (1 2 3 4)) ((1) (0 2 3 4)) ((2) (0 1 3 4))
            ((3) (0 1 2 4)) ((4) (0 1 2 3)))

    Of course, in practice one would typically train a model and return the trained model and/or its score on TEST. Also, sometimes one may want to do only some of the folds and remember which ones they were:

        (cross-validate '(0 1 2 3 4)
                        (lambda (fold test training)
                          (list :fold fold test training))
                        :folds '(2 3)
                        :pass-fold t)
        => ((:fold 2 (2) (0 1 3 4)) (:fold 3 (3) (0 1 2 4)))

    Finally, the way the data is split can be customized. By default SPLIT-FOLD/MOD is called with the arguments DATA, the fold (from among FOLDS) and N-FOLDS. SPLIT-FOLD/MOD returns two values which are then passed on to FN. One can use SPLIT-FOLD/CONT or SPLIT-STRATIFIED or any other function that works with these arguments. The only real constraint is that FN has to take as many arguments (plus the fold argument if PASS-FOLD) as SPLIT-FN returns.
<a id="x-28MGL-RESAMPLE-3ASPLIT-FOLD-2FMOD-20FUNCTION-29"></a>
- [function] SPLIT-FOLD/MOD SEQ FOLD N-FOLDS

    Partition SEQ into two sequences: one with elements of SEQ with indices whose remainder is FOLD when divided by N-FOLDS, and a second one with the rest. The second one is the larger set. The order of elements remains stable. This function is suitable as the SPLIT-FN argument of CROSS-VALIDATE.
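A small sketch of the documented behaviour (the returned two values are described in comments):

```common-lisp
;; Indices 1 and 4 have remainder 1 when divided by 3, so those elements
;; form the first value; the rest form the second.
(split-fold/mod '(:a :b :c :d :e :f :g) 1 3)
;; first value:  (:B :E)
;; second value: (:A :C :D :F :G)
```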
<a id="x-28MGL-RESAMPLE-3ASPLIT-FOLD-2FCONT-20FUNCTION-29"></a>
- [function] SPLIT-FOLD/CONT SEQ FOLD N-FOLDS

    Imagine dividing SEQ into N-FOLDS subsequences of the same size (bar rounding). Return the subsequence of index FOLD as the first value and all the other subsequences concatenated into one as the second value. The order of elements remains stable. This function is suitable as the SPLIT-FN argument of CROSS-VALIDATE.
<a id="x-28MGL-RESAMPLE-3ASPLIT-STRATIFIED-20FUNCTION-29"></a>
- [function] SPLIT-STRATIFIED SEQ FOLD N-FOLDS &KEY (KEY #'IDENTITY) (TEST #'EQL) WEIGHT

    Split SEQ into N-FOLDS partitions (as in FRACTURE-STRATIFIED). Return the partition of index FOLD as the first value, and the concatenation of the rest as the second value. This function is suitable as the SPLIT-FN argument of CROSS-VALIDATE (most likely as a closure with KEY, TEST, WEIGHT bound).
<a id="x-28MGL-RESAMPLE-3A-40MGL-RESAMPLE-BAGGING-20MGL-PAX-3ASECTION-29"></a>
4.3 Bagging
<a id="x-28MGL-RESAMPLE-3ABAG-20FUNCTION-29"></a>
- [function] BAG SEQ FN &KEY (RATIO 1) N WEIGHT (REPLACEMENT T) KEY (TEST #'EQL) (RANDOM-STATE *RANDOM-STATE*)

    Sample from SEQ with SAMPLE-FROM (passing RATIO, WEIGHT, REPLACEMENT), or SAMPLE-STRATIFIED if KEY is not NIL. Call FN with the sample. If N is NIL then keep repeating this until FN performs a non-local exit. Else N must be a non-negative integer, N iterations will be performed, and the primary values returned by FN are collected into a list and returned. See SAMPLE-FROM and SAMPLE-STRATIFIED for examples.
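A minimal sketch of bootstrap sampling with BAG (non-deterministic; the shown result is only illustrative):

```common-lisp
;; Three bootstrap replicates of the data, each simply collected by #'IDENTITY.
(bag '(0 1 2 3 4) #'identity :n 3)
;; => e.g. ((1 1 3 4 0) (2 2 0 4 3) (0 1 2 2 4))
```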
<a id="x-28MGL-RESAMPLE-3ASAMPLE-FROM-20FUNCTION-29"></a>
- [function] SAMPLE-FROM RATIO SEQ &KEY WEIGHT REPLACEMENT (RANDOM-STATE *RANDOM-STATE*)

    Return a sequence constructed by sampling with or without REPLACEMENT from SEQ. The sum of weights in the result sequence will approximately be the sum of weights of SEQ times RATIO. If WEIGHT is NIL then elements are assumed to have equal weights, else WEIGHT should return a non-negative real number when called with an element of SEQ.

    To randomly select half of the elements:

        (sample-from 1/2 '(0 1 2 3 4 5))
        => (5 3 2)

    To randomly select some elements such that the sum of their weights constitutes about half of the sum of weights across the whole sequence:

        (sample-from 1/2 '(0 1 2 3 4 5 6 7 8 9) :weight #'identity)
        => (9 4 1 6 8) ;; sums to 28, which is near 45/2

    To sample with replacement (that is, allowing an element to be sampled multiple times):

        (sample-from 1 '(0 1 2 3 4 5) :replacement t)
        => (1 1 5 1 4 4)
<a id="x-28MGL-RESAMPLE-3ASAMPLE-STRATIFIED-20FUNCTION-29"></a>
- [function] SAMPLE-STRATIFIED RATIO SEQ &KEY WEIGHT REPLACEMENT (KEY #'IDENTITY) (TEST #'EQL) (RANDOM-STATE *RANDOM-STATE*)

    Like SAMPLE-FROM but makes sure that the weighted proportion of classes in the result is approximately the same as the proportion in SEQ. See STRATIFY for the description of KEY and TEST.
<a id="x-28MGL-RESAMPLE-3A-40MGL-RESAMPLE-CV-BAGGING-20MGL-PAX-3ASECTION-29"></a>
4.4 CV Bagging
<a id="x-28MGL-RESAMPLE-3ABAG-CV-20FUNCTION-29"></a>
- [function] BAG-CV DATA FN &KEY N (N-FOLDS 5) (FOLDS (ALEXANDRIA:IOTA N-FOLDS)) (SPLIT-FN #'SPLIT-FOLD/MOD) PASS-FOLD (RANDOM-STATE *RANDOM-STATE*)

    Perform cross-validation on different shuffles of DATA N times and collect the results. Since CROSS-VALIDATE collects the return values of FN, the return value of this function is a list of lists of FN results. If N is NIL, don't collect anything, just keep doing repeated CVs until FN performs a non-local exit.

    The following example simply collects the test and training sets for 2-fold CV repeated 3 times with shuffled data:

        ;;; This is non-deterministic.
        (bag-cv '(0 1 2 3 4) #'list :n 3 :n-folds 2)
        => ((((2 3 4) (1 0)) ((1 0) (2 3 4)))
            (((2 1 0) (4 3)) ((4 3) (2 1 0)))
            (((1 0 3) (2 4)) ((2 4) (1 0 3))))
CV bagging is useful when a single CV is not producing stable results. As an ensemble method, CV bagging has the advantage over bagging that each example will occur the same number of times and after the first CV is complete there is a complete but less reliable estimate for each example which gets refined by further CVs.
<a id="x-28MGL-RESAMPLE-3A-40MGL-RESAMPLE-MISC-20MGL-PAX-3ASECTION-29"></a>
4.5 Miscellaneous Operations
<a id="x-28MGL-RESAMPLE-3ASPREAD-STRATA-20FUNCTION-29"></a>
- [function] SPREAD-STRATA SEQ &KEY (KEY #'IDENTITY) (TEST #'EQL)

    Return a sequence that's a reordering of SEQ such that elements belonging to different strata (under KEY and TEST, see STRATIFY) are distributed evenly. The order of elements belonging to the same stratum is unchanged.

    For example, to make sure that even and odd numbers are distributed evenly:

        (spread-strata '(0 2 4 6 8 1 3 5 7 9) :key #'evenp)
        => (0 1 2 3 4 5 6 7 8 9)

    Same thing with unbalanced classes:

        (spread-strata (vector 0 2 3 5 6 1 4)
                       :key (lambda (x)
                              (if (member x '(1 4)) t nil)))
        => #(0 1 2 3 4 5 6)
<a id="x-28MGL-RESAMPLE-3AZIP-EVENLY-20FUNCTION-29"></a>
- [function] ZIP-EVENLY SEQS &KEY RESULT-TYPE

    Make a single sequence out of the sequences in SEQS so that in the returned sequence indices of elements belonging to the same source sequence are spread evenly across the whole range. The result is a list if RESULT-TYPE is LIST, and a vector if RESULT-TYPE is VECTOR. If RESULT-TYPE is NIL, then it's determined by the type of the first sequence in SEQS.

        (zip-evenly '((0 2 4) (1 3)))
        => (0 1 2 3 4)
<a id="x-28MGL-CORE-3A-40MGL-CORE-20MGL-PAX-3ASECTION-29"></a>
5 Core
[in package MGL-CORE]
<a id="x-28MGL-CORE-3A-40MGL-PERSISTENCE-20MGL-PAX-3ASECTION-29"></a>
5.1 Persistence
<a id="x-28MGL-CORE-3ALOAD-STATE-20FUNCTION-29"></a>
- [function] LOAD-STATE FILENAME OBJECT

    Load weights of OBJECT from FILENAME. Return OBJECT.
<a id="x-28MGL-CORE-3ASAVE-STATE-20FUNCTION-29"></a>
- [function] SAVE-STATE FILENAME OBJECT &KEY (IF-EXISTS :ERROR) (ENSURE T)

    Save weights of OBJECT to FILENAME. If ENSURE, then ENSURE-DIRECTORIES-EXIST is called on FILENAME. IF-EXISTS is passed on to OPEN. Return OBJECT.
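A round-trip sketch, assuming MODEL stands for some trained model object and the path is arbitrary:

```common-lisp
(save-state "/tmp/model-weights.bin" model :if-exists :supersede)
;; ... later, with a freshly constructed model of the same structure ...
(load-state "/tmp/model-weights.bin" model)
```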
<a id="x-28MGL-CORE-3AREAD-STATE-20FUNCTION-29"></a>
- [function] READ-STATE OBJECT STREAM

    Read the weights of OBJECT from the bivalent STREAM where weights mean the learnt parameters. There is currently no sanity checking of data which will most certainly change in the future together with the serialization format. Return OBJECT.
<a id="x-28MGL-CORE-3AWRITE-STATE-20FUNCTION-29"></a>
- [function] WRITE-STATE OBJECT STREAM

    Write the weights of OBJECT to the bivalent STREAM. Return OBJECT.
<a id="x-28MGL-CORE-3AREAD-STATE-2A-20GENERIC-FUNCTION-29"></a>
- [generic-function] READ-STATE* OBJECT STREAM CONTEXT

    This is the extension point for READ-STATE. It is guaranteed that primary READ-STATE* methods will be called only once for each OBJECT (under EQ). CONTEXT is an opaque object and must be passed on to any recursive READ-STATE* calls.
<a id="x-28MGL-CORE-3AWRITE-STATE-2A-20GENERIC-FUNCTION-29"></a>
- [generic-function] WRITE-STATE* OBJECT STREAM CONTEXT

    This is the extension point for WRITE-STATE. It is guaranteed that primary WRITE-STATE* methods will be called only once for each OBJECT (under EQ). CONTEXT is an opaque object and must be passed on to any recursive WRITE-STATE* calls.
<a id="x-28MGL-CORE-3A-40MGL-MODEL-STRIPE-20MGL-PAX-3ASECTION-29"></a>
5.2 Batch Processing
Processing instances one by one during training or prediction can be slow. The models that support batch processing for greater efficiency are said to be striped.
Typically, during or after creating a model, one sets MAX-N-STRIPES on it to a positive integer. When a batch of instances is to be fed to the model, it is first broken into subbatches of length at most MAX-N-STRIPES. For each subbatch, SET-INPUT (FIXDOC) is called and a before method takes care of setting N-STRIPES to the actual number of instances in the subbatch. When MAX-N-STRIPES is set, internal data structures may be resized, which is an expensive operation. Setting N-STRIPES is a comparatively cheap operation, often implemented as matrix reshaping.
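A sketch of the typical usage pattern described above (MODEL stands for any striped model):

```common-lisp
;; Do the expensive resizing once, up front.
(setf (max-n-stripes model) 100)
;; Per batch, the cheap N-STRIPES adjustment happens automatically in
;; SET-INPUT's :BEFORE method, so batches smaller than 100 also work.
```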
Note that for models made of different parts (for example, MGL-BP:BPN consists of MGL-BP:LUMPs), setting these values affects the constituent parts, but one should never change the number of stripes of the parts directly because that would lead to an internal inconsistency in the model.
<a id="x-28MGL-CORE-3AMAX-N-STRIPES-20GENERIC-FUNCTION-29"></a>
- [generic-function] MAX-N-STRIPES OBJECT

    The number of stripes with which the OBJECT is capable of dealing simultaneously.
<a id="x-28MGL-CORE-3ASET-MAX-N-STRIPES-20GENERIC-FUNCTION-29"></a>
- [generic-function] SET-MAX-N-STRIPES MAX-N-STRIPES OBJECT

    Allocate the necessary stuff to allow for MAX-N-STRIPES number of stripes to be worked with simultaneously in OBJECT. This is called when MAX-N-STRIPES is SETF'ed.
<a id="x-28MGL-CORE-3AN-STRIPES-20GENERIC-FUNCTION-29"></a>
- [generic-function] N-STRIPES OBJECT

    The number of stripes currently present in OBJECT. This is at most MAX-N-STRIPES.
<a id="x-28MGL-CORE-3ASET-N-STRIPES-20GENERIC-FUNCTION-29"></a>
- [generic-function] SET-N-STRIPES N-STRIPES OBJECT

    Set the number of stripes (out of MAX-N-STRIPES) that are in use in OBJECT. This is called when N-STRIPES is SETF'ed.
<a id="x-28MGL-CORE-3AWITH-STRIPES-20MGL-PAX-3AMACRO-29"></a>
- [macro] WITH-STRIPES SPECS &BODY BODY

    Bind start and optionally end indices belonging to stripes in striped objects.

        (WITH-STRIPES ((STRIPE1 OBJECT1 START1 END1)
                       (STRIPE2 OBJECT2 START2)
                       ...)
          ...)

    This is how one's supposed to find the index range corresponding to the Nth input in an input lump of a bpn:

        (with-stripes ((n input-lump start end))
          (loop for i upfrom start below end
                do (setf (mref (nodes input-lump) i) 0d0)))

    Note how the input lump is striped, but the matrix into which we are indexing (NODES) is not known to WITH-STRIPES. In fact, for lumps the same stripe indices work with NODES and MGL-BP:DERIVATIVES.
<a id="x-28MGL-CORE-3ASTRIPE-START-20GENERIC-FUNCTION-29"></a>
- [generic-function] STRIPE-START STRIPE OBJECT

    Return the start index of STRIPE in some array or matrix of OBJECT.
<a id="x-28MGL-CORE-3ASTRIPE-END-20GENERIC-FUNCTION-29"></a>
- [generic-function] STRIPE-END STRIPE OBJECT

    Return the end index (exclusive) of STRIPE in some array or matrix of OBJECT.
<a id="x-28MGL-CORE-3ASET-INPUT-20GENERIC-FUNCTION-29"></a>
- [generic-function] SET-INPUT INSTANCES MODEL

    Set INSTANCES as inputs in MODEL. INSTANCES is always a SEQUENCE of instances even for models not capable of batch operation. It sets N-STRIPES to (LENGTH INSTANCES) in a :BEFORE method.
<a id="x-28MGL-CORE-3AMAP-BATCHES-FOR-MODEL-20FUNCTION-29"></a>
- [function] MAP-BATCHES-FOR-MODEL FN DATASET MODEL

    Call FN with batches of instances from DATASET suitable for MODEL. The number of instances in a batch is MAX-N-STRIPES of MODEL or less if there are no more instances left.
<a id="x-28MGL-CORE-3ADO-BATCHES-FOR-MODEL-20MGL-PAX-3AMACRO-29"></a>
- [macro] DO-BATCHES-FOR-MODEL (BATCH (DATASET MODEL)) &BODY BODY

    Convenience macro over MAP-BATCHES-FOR-MODEL.
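For instance, to feed a dataset to a model batch by batch (a sketch; what is done with each batch after SET-INPUT depends on the model type):

```common-lisp
(do-batches-for-model (batch (dataset model))
  ;; BATCH has at most MAX-N-STRIPES instances.
  (set-input batch model)
  ;; ... run the model on this batch ...
  )
```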
<a id="x-28MGL-CORE-3A-40MGL-EXECUTORS-20MGL-PAX-3ASECTION-29"></a>
5.3 Executors
<a id="x-28MGL-CORE-3AMAP-OVER-EXECUTORS-20GENERIC-FUNCTION-29"></a>
- [generic-function] MAP-OVER-EXECUTORS FN INSTANCES PROTOTYPE-EXECUTOR

    Divide INSTANCES between executors that perform the same function as PROTOTYPE-EXECUTOR and call FN with the instances and the executor for which the instances are.

    Some objects conflate function and call: the forward pass of an MGL-BP:BPN computes output from inputs, so it is like a function, but it also doubles as a function call in the sense that the bpn (function) object changes state during the computation of the output. Hence not even the forward pass of a bpn is thread safe. There is also the restriction that all inputs must be of the same size.

    For example, if we have a function that builds a bpn for an input of a certain size, then we can create a factory that creates bpns for a particular call. The factory probably wants to keep the weights the same though. In Parameterized Executor Cache, MAKE-EXECUTOR-WITH-PARAMETERS is this factory.

    Parallelization of execution is another possibility MAP-OVER-EXECUTORS allows, but there is no prebuilt solution for it, yet.

    The default implementation simply calls FN with INSTANCES and PROTOTYPE-EXECUTOR.
<a id="x-28MGL-CORE-3ADO-EXECUTORS-20MGL-PAX-3AMACRO-29"></a>
- [macro] DO-EXECUTORS (INSTANCES OBJECT) &BODY BODY

    Convenience macro on top of MAP-OVER-EXECUTORS.
<a id="x-28MGL-CORE-3A-40MGL-PARAMETERIZED-EXECUTOR-CACHE-20MGL-PAX-3ASECTION-29"></a>
5.3.1 Parameterized Executor Cache
<a id="x-28MGL-CORE-3APARAMETERIZED-EXECUTOR-CACHE-MIXIN-20CLASS-29"></a>
- [class] PARAMETERIZED-EXECUTOR-CACHE-MIXIN

    Mix this into a model, implement INSTANCE-TO-EXECUTOR-PARAMETERS and MAKE-EXECUTOR-WITH-PARAMETERS, and DO-EXECUTORS will be able to build executors suitable for different instances. The canonical example is using a BPN to compute the means and covariances of a Gaussian process. Since each instance is made of a variable number of observations, the size of the input is not constant, thus we have a bpn (an executor) for each input dimension (the parameters).
<a id="x-28MGL-CORE-3AMAKE-EXECUTOR-WITH-PARAMETERS-20GENERIC-FUNCTION-29"></a>
- [generic-function] MAKE-EXECUTOR-WITH-PARAMETERS PARAMETERS CACHE

    Create a new executor for PARAMETERS. CACHE is a PARAMETERIZED-EXECUTOR-CACHE-MIXIN. In the BPN Gaussian process example, PARAMETERS would be a list of input dimensions.
<a id="x-28MGL-CORE-3AINSTANCE-TO-EXECUTOR-PARAMETERS-20GENERIC-FUNCTION-29"></a>
- [generic-function] INSTANCE-TO-EXECUTOR-PARAMETERS INSTANCE CACHE

    Return the parameters for an executor able to handle INSTANCE. Called by MAP-OVER-EXECUTORS on CACHE (that's a PARAMETERIZED-EXECUTOR-CACHE-MIXIN). The returned parameters are keys in an EQUAL parameters->executor hash table.
<a id="x-28MGL-CORE-3A-40MGL-MONITORING-20MGL-PAX-3ASECTION-29"></a>
6 Monitoring
[in package MGL-CORE]
When training or applying a model, one often wants to track various statistics. For example, in the case of training a neural network with cross-entropy loss, these statistics could be the average cross-entropy loss itself, classification accuracy, or even the entire confusion matrix and sparsity levels in hidden layers. Also, there is the question of what to do with the measured values (log and forget, add to some counter or a list).
So there may be several phases of operation that we want to keep an eye on. Let's call these events. There can also be many fairly independent things to do in response to an event. Let's call these monitors. Some monitors are a composition of two operations: one that extracts some measurements and another that aggregates those measurements. Let's call these two measurers and counters, respectively.
For example, consider training a backpropagation neural network. We want to look at the state of the network just after the backward pass. MGL-BP:BP-LEARNER has a MONITORS event hook corresponding to the moment after backpropagating the gradients. Suppose we are interested in how the training cost evolves:
(push (make-instance 'monitor
:measurer (lambda (instances bpn)
(declare (ignore instances))
(mgl-bp:cost bpn))
:counter (make-instance 'basic-counter))
(monitors learner))
During training, this monitor will track the cost of training examples behind the scenes. If we want to print and reset this monitor periodically, we can put another monitor on MGL-OPT:ITERATIVE-OPTIMIZER's MGL-OPT:ON-N-INSTANCES-CHANGED accessor:
(push (lambda (optimizer gradient-source n-instances)
(declare (ignore optimizer))
(when (zerop (mod n-instances 1000))
(format t "n-instances: ~S~%" n-instances)
(dolist (monitor (monitors gradient-source))
(when (counter monitor)
(format t "~A~%" (counter monitor))
(reset-counter (counter monitor)))))
(mgl-opt:on-n-instances-changed optimizer))
Note that the monitor we push can be anything as long as APPLY-MONITOR is implemented on it with the appropriate signature. Also note that the ZEROP + MOD logic is fragile, so you will likely want to use MGL-OPT:MONITOR-OPTIMIZATION-PERIODICALLY instead of doing the above.
So that's the general idea. Concrete events are documented where they are signalled. Often there are task specific utilities that create a reasonable set of default monitors (see Classification Monitors).
<a id="x-28MGL-CORE-3AAPPLY-MONITORS-20FUNCTION-29"></a>
- [function] APPLY-MONITORS MONITORS &REST ARGUMENTS

    Call APPLY-MONITOR on each monitor in MONITORS and ARGUMENTS. This is how an event is fired.
<a id="x-28MGL-CORE-3AAPPLY-MONITOR-20GENERIC-FUNCTION-29"></a>
- [generic-function] APPLY-MONITOR MONITOR &REST ARGUMENTS

    Apply MONITOR to ARGUMENTS. This sounds fairly generic, because it is. MONITOR can be anything, even a simple function or symbol, in which case this is just CL:APPLY. See Monitors for more.
<a id="x-28MGL-CORE-3ACOUNTER-20GENERIC-FUNCTION-29"></a>
- [generic-function] COUNTER MONITOR

    Return an object representing the state of MONITOR or NIL, if it doesn't have any (say because it's a simple logging function). Most monitors have counters into which they accumulate results until they are printed and reset. See Counters for more.
<a id="x-28MGL-CORE-3AMONITOR-MODEL-RESULTS-20FUNCTION-29"></a>
- [function] MONITOR-MODEL-RESULTS FN DATASET MODEL MONITORS

    Call FN with batches of instances from DATASET until it runs out (as in DO-BATCHES-FOR-MODEL). FN is supposed to apply MODEL to the batch and return some kind of result (for neural networks, the result is the model state itself). Apply MONITORS to each batch and the result returned by FN for that batch. Finally, return the list of counters of MONITORS.

    The purpose of this function is to collect various results and statistics (such as error measures) efficiently by applying the model only once, leaving extraction of quantities of interest from the model's results to MONITORS.

    See the model specific versions of this function such as MGL-BP:MONITOR-BPN-RESULTS.
<a id="x-28MGL-CORE-3AMONITORS-20GENERIC-FUNCTION-29"></a>
- [generic-function] MONITORS OBJECT

    Return monitors associated with OBJECT. See various methods such as MONITORS for more documentation.
<a id="x-28MGL-CORE-3A-40MGL-MONITOR-20MGL-PAX-3ASECTION-29"></a>
6.1 Monitors
<a id="x-28MGL-CORE-3AMONITOR-20CLASS-29"></a>
- [class] MONITOR

    A monitor that has another monitor called MEASURER embedded in it. When this monitor is applied, it applies the measurer and passes the returned values to ADD-TO-COUNTER called on its COUNTER slot. One may further specialize APPLY-MONITOR to change that.

    This class is useful when the same event monitor is applied repeatedly over a period and its results must be aggregated, such as when training statistics are being tracked or when predictions are being made. Note that the monitor must be compatible with the event it handles. That is, the embedded MEASURER must be prepared to take the arguments that are documented to come with the event.
<a id="x-28MGL-CORE-3AMEASURER-20-28MGL-PAX-3AREADER-20MGL-CORE-3AMONITOR-29-29"></a>
- [reader] MEASURER MONITOR (:MEASURER)

    This must be a monitor itself, which only means that APPLY-MONITOR is defined on it (but see Monitoring). The returned values are aggregated by COUNTER. See Measurers for a library of measurers.
<a id="x-28MGL-CORE-3ACOUNTER-20-28MGL-PAX-3AREADER-20MGL-CORE-3AMONITOR-29-29"></a>
- [reader] COUNTER MONITOR (:COUNTER)

    The COUNTER of a monitor carries out the aggregation of results returned by MEASURER. See Counters for a library of counters.
<a id="x-28MGL-CORE-3A-40MGL-MEASURER-20MGL-PAX-3ASECTION-29"></a>
6.2 Measurers
MEASURER is a part of MONITOR objects, an embedded monitor that computes a specific quantity (e.g. classification accuracy) from the arguments of the event it is applied to (e.g. the model results). Measurers are often implemented by combining some kind of model specific extractor with a generic measurer function.

All generic measurer functions return their results as multiple values matching the arguments of ADD-TO-COUNTER for a counter of a certain type (see Counters) so as to make them easily used in a MONITOR:

    (multiple-value-call #'add-to-counter <some-counter>
                         <call-to-some-measurer>)
The counter class compatible with the measurer this way is noted for each function.
For a list of measurer functions see Classification Measurers.
<a id="x-28MGL-CORE-3A-40MGL-COUNTER-20MGL-PAX-3ASECTION-29"></a>
6.3 Counters
<a id="x-28MGL-CORE-3AADD-TO-COUNTER-20GENERIC-FUNCTION-29"></a>
- [generic-function] ADD-TO-COUNTER COUNTER &REST ARGS

    Add ARGS to COUNTER in some way. See specialized methods for type specific documentation. The kind of arguments to be supported is what the measurer functions (see Measurers) intended to be paired with the counter return as multiple values.
<a id="x-28MGL-CORE-3ACOUNTER-VALUES-20GENERIC-FUNCTION-29"></a>
- [generic-function] COUNTER-VALUES COUNTER

    Return any number of values representing the state of COUNTER. See specialized methods for type specific documentation.
<a id="x-28MGL-CORE-3ACOUNTER-RAW-VALUES-20GENERIC-FUNCTION-29"></a>
- [generic-function] COUNTER-RAW-VALUES COUNTER

    Return any number of values representing the state of COUNTER in such a way that passing the returned values as arguments to ADD-TO-COUNTER on a fresh instance of the same type recreates the original state.
<a id="x-28MGL-CORE-3ARESET-COUNTER-20GENERIC-FUNCTION-29"></a>
- [generic-function] RESET-COUNTER COUNTER

    Restore the state of COUNTER to what it was just after creation.
<a id="x-28MGL-CORE-3A-40MGL-ATTRIBUTES-20MGL-PAX-3ASECTION-29"></a>
6.3.1 Attributes
<a id="x-28MGL-CORE-3AATTRIBUTED-20CLASS-29"></a>
- [class] ATTRIBUTED

    This is a utility class that all counters subclass. The ATTRIBUTES plist can hold basically anything. Currently the attributes are only used when printing and they can be specified by the user. The monitor maker functions such as those in Classification Monitors also add attributes of their own to the counters they create.

    With the :PREPEND-ATTRIBUTES initarg one can easily add new attributes without clobbering those in the :INITFORM, (:TYPE "rmse") in this case.

        (princ (make-instance 'rmse-counter
                              :prepend-attributes '(:event "pred."
                                                    :dataset "test")))
        ;; pred. test rmse: 0.000e+0 (0)
        => #<RMSE-COUNTER pred. test rmse: 0.000e+0 (0)>
<a id="x-28MGL-CORE-3AATTRIBUTES-20-28MGL-PAX-3AACCESSOR-20MGL-CORE-3AATTRIBUTED-29-29"></a>
- [accessor] ATTRIBUTES ATTRIBUTED (:ATTRIBUTES = NIL)

    A plist of attribute keys and values.
<a id="x-28MGL-COMMON-3ANAME-20-28METHOD-20NIL-20-28MGL-CORE-3AATTRIBUTED-29-29-29"></a>
- [method] NAME (ATTRIBUTED ATTRIBUTED)

    Return a string assembled from the values of the ATTRIBUTES of ATTRIBUTED. If there are multiple entries with the same key, then they are printed near together.

    Values may be padded according to an enclosing WITH-PADDED-ATTRIBUTE-PRINTING.
<a id="x-28MGL-CORE-3AWITH-PADDED-ATTRIBUTE-PRINTING-20MGL-PAX-3AMACRO-29"></a>
- [macro] WITH-PADDED-ATTRIBUTE-PRINTING (ATTRIBUTEDS) &BODY BODY

    Note the width of values for each attribute key which is the number of characters in the value's PRINC-TO-STRING'ed representation. In BODY, if attributes with the same key are printed, they are forced to be at least this wide. This allows for nice, table-like output:

        (let ((attributeds
                (list (make-instance 'basic-counter
                                     :attributes '(:a 1 :b 23 :c 456))
                      (make-instance 'basic-counter
                                     :attributes '(:a 123 :b 45 :c 6)))))
          (with-padded-attribute-printing (attributeds)
            (map nil (lambda (attributed)
                       (format t "~A~%" attributed))
                 attributeds)))
        ;; 1   23 456: 0.000e+0 (0)
        ;; 123 45 6  : 0.000e+0 (0)
<a id="x-28MGL-CORE-3ALOG-PADDED-20FUNCTION-29"></a>
- [function] LOG-PADDED ATTRIBUTEDS

    Log (see LOG-MSG) ATTRIBUTEDS non-escaped (as in PRINC or ~A) with the output being as table-like as possible.
<a id="x-28MGL-CORE-3A-40MGL-COUNTER-CLASSES-20MGL-PAX-3ASECTION-29"></a>
6.3.2 Counter classes
In addition to the really basic ones here, also see Classification Counters.
<a id="x-28MGL-CORE-3ABASIC-COUNTER-20CLASS-29"></a>
- [class] BASIC-COUNTER ATTRIBUTED

    A simple counter whose ADD-TO-COUNTER takes two additional parameters: increments to the internal sums called the NUMERATOR and DENOMINATOR. COUNTER-VALUES returns two values:

    - NUMERATOR divided by DENOMINATOR (or 0 if DENOMINATOR is 0) and

    - DENOMINATOR.

    Here is an example that computes the mean of 5 things received in two batches:

        (let ((counter (make-instance 'basic-counter)))
          (add-to-counter counter 6.5 3)
          (add-to-counter counter 3.5 2)
          counter)
        => #<BASIC-COUNTER 2.00000e+0 (5)>
<a id="x-28MGL-CORE-3ARMSE-COUNTER-20CLASS-29"></a>
- [class] RMSE-COUNTER BASIC-COUNTER

    A BASIC-COUNTER whose numerator accumulates the square of some statistics. It has the attribute :TYPE "rmse". COUNTER-VALUES returns the square root of what BASIC-COUNTER's COUNTER-VALUES would return.

        (let ((counter (make-instance 'rmse-counter)))
          (add-to-counter counter (+ (* 3 3) (* 4 4)) 2)
          counter)
        => #<RMSE-COUNTER rmse: 3.53553e+0 (2)>
<a id="x-28MGL-CORE-3ACONCAT-COUNTER-20CLASS-29"></a>
- [class] CONCAT-COUNTER ATTRIBUTED

    A counter that simply concatenates sequences.

    ```cl-transcript
    (let ((counter (make-instance 'concat-counter)))
      (add-to-counter counter '(1 2 3) #(4 5))
      (add-to-counter counter '(6 7))
      (counter-values counter))
    => (1 2 3 4 5 6 7)
    ```
<a id="x-28MGL-CORE-3ACONCATENATION-TYPE-20-28MGL-PAX-3AREADER-20MGL-CORE-3ACONCAT-COUNTER-29-29"></a>
- [reader] CONCATENATION-TYPE CONCAT-COUNTER (:CONCATENATION-TYPE = 'LIST)

    A type designator suitable as the RESULT-TYPE argument to CONCATENATE.
<a id="x-28MGL-CORE-3A-40MGL-CLASSIFICATION-20MGL-PAX-3ASECTION-29"></a>
7 Classification
[in package MGL-CORE]
To be able to measure classification related quantities, we need to define what the label of an instance is. Customization is possible by implementing a method for a specific type of instance, but these functions only ever appear as defaults that can be overridden.
<a id="x-28MGL-CORE-3ALABEL-INDEX-20GENERIC-FUNCTION-29"></a>
- [generic-function] LABEL-INDEX INSTANCE

    Return the label of INSTANCE as a non-negative integer.
<a id="x-28MGL-CORE-3ALABEL-INDEX-DISTRIBUTION-20GENERIC-FUNCTION-29"></a>
- [generic-function] LABEL-INDEX-DISTRIBUTION INSTANCE

    Return a one dimensional array of probabilities representing the distribution of labels. The probability of the label with LABEL-INDEX I is the element at index I of the returned array.
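A sketch of providing these defaults for a hypothetical instance type (the EXAMPLE struct and the fixed 10 classes are made up for illustration):

```common-lisp
(defstruct example image label)          ; LABEL is an integer in [0, 10)

(defmethod label-index ((instance example))
  (example-label instance))

(defmethod label-index-distribution ((instance example))
  ;; One-hot distribution over the 10 classes.
  (let ((distribution (make-array 10 :initial-element 0d0)))
    (setf (aref distribution (example-label instance)) 1d0)
    distribution))
```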
The following two functions are basically the same as the previous two, but in batch mode: they return a sequence of label indices or distributions. These are called on results produced by models. Implement these for a model and the monitor maker functions below will automatically work. See FIXDOC: for bpn and boltzmann.
<a id="x-28MGL-CORE-3ALABEL-INDICES-20GENERIC-FUNCTION-29"></a>
- [generic-function] LABEL-INDICES RESULTS

    Return a sequence of label indices for RESULTS produced by some model for a batch of instances. This is akin to LABEL-INDEX.
<a id="x-28MGL-CORE-3ALABEL-INDEX-DISTRIBUTIONS-20GENERIC-FUNCTION-29"></a>
- [generic-function] LABEL-INDEX-DISTRIBUTIONS RESULT

    Return a sequence of label index distributions for RESULTS produced by some model for a batch of instances. This is akin to LABEL-INDEX-DISTRIBUTION.
<a id="x-28MGL-CORE-3A-40MGL-CLASSIFICATION-MONITOR-20MGL-PAX-3ASECTION-29"></a>
7.1 Classification Monitors
The following functions return a list of monitors. The monitors are for events of signature (INSTANCES MODEL) such as those produced by MONITOR-MODEL-RESULTS and its various model specific variations. They are model-agnostic functions, extensible to new classifier types.
<a id="x-28MGL-CORE-3AMAKE-CLASSIFICATION-ACCURACY-MONITORS-20FUNCTION-29"></a>
- [function] MAKE-CLASSIFICATION-ACCURACY-MONITORS MODEL &KEY OPERATION-MODE ATTRIBUTES (LABEL-INDEX-FN #'LABEL-INDEX)

    Return a list of MONITOR objects associated with CLASSIFICATION-ACCURACY-COUNTERs. LABEL-INDEX-FN is a function like LABEL-INDEX. See that function for more.

    Implemented in terms of MAKE-CLASSIFICATION-ACCURACY-MONITORS*.
<a id="x-28MGL-CORE-3AMAKE-CROSS-ENTROPY-MONITORS-20FUNCTION-29"></a>
- [function] MAKE-CROSS-ENTROPY-MONITORS MODEL &KEY OPERATION-MODE ATTRIBUTES (LABEL-INDEX-DISTRIBUTION-FN #'LABEL-INDEX-DISTRIBUTION)

    Return a list of MONITOR objects associated with CROSS-ENTROPY-COUNTERs. LABEL-INDEX-DISTRIBUTION-FN is a function like LABEL-INDEX-DISTRIBUTION. See that function for more.

    Implemented in terms of MAKE-CROSS-ENTROPY-MONITORS*.
<a id="x-28MGL-CORE-3AMAKE-LABEL-MONITORS-20FUNCTION-29"></a>
- [function] MAKE-LABEL-MONITORS MODEL &KEY OPERATION-MODE ATTRIBUTES (LABEL-INDEX-FN #'LABEL-INDEX) (LABEL-INDEX-DISTRIBUTION-FN #'LABEL-INDEX-DISTRIBUTION)

    Return classification accuracy and cross-entropy monitors. See MAKE-CLASSIFICATION-ACCURACY-MONITORS and MAKE-CROSS-ENTROPY-MONITORS for a description of parameters.
The monitor makers above can be extended to support new classifier types via the following generic functions.
<a id="x-28MGL-CORE-3AMAKE-CLASSIFICATION-ACCURACY-MONITORS-2A-20GENERIC-FUNCTION-29"></a>
- [generic-function] MAKE-CLASSIFICATION-ACCURACY-MONITORS* MODEL OPERATION-MODE LABEL-INDEX-FN ATTRIBUTES

    Identical to MAKE-CLASSIFICATION-ACCURACY-MONITORS bar the keyword arguments. Specialize this to add support for new model types. The default implementation also allows for some extensibility: if LABEL-INDICES is defined on MODEL, then it will be used to extract label indices from model results.
<a id="x-28MGL-CORE-3AMAKE-CROSS-ENTROPY-MONITORS-2A-20GENERIC-FUNCTION-29"></a>
- [generic-function] MAKE-CROSS-ENTROPY-MONITORS* MODEL OPERATION-MODE LABEL-INDEX-DISTRIBUTION-FN ATTRIBUTES

    Identical to MAKE-CROSS-ENTROPY-MONITORS bar the keyword arguments. Specialize this to add support for new model types. The default implementation also allows for some extensibility: if LABEL-INDEX-DISTRIBUTIONS is defined on MODEL, then it will be used to extract label distributions from model results.
<a id="x-28MGL-CORE-3A-40MGL-CLASSIFICATION-MEASURER-20MGL-PAX-3ASECTION-29"></a>
7.2 Classification Measurers
The functions here compare some known good solution (also known as ground truth or target) to a prediction or approximation and return some measure of their [dis]similarity. They are model independent, hence one has to extract the ground truths and predictions first. Rarely used directly, they are mostly hidden behind Classification Monitors.
<a id="x-28MGL-CORE-3AMEASURE-CLASSIFICATION-ACCURACY-20FUNCTION-29"></a>
- [function] MEASURE-CLASSIFICATION-ACCURACY TRUTHS PREDICTIONS &KEY (TEST #'EQL) TRUTH-KEY PREDICTION-KEY WEIGHT

    Return the number of correct classifications and as the second value the number of instances (equal to the length of TRUTHS in the non-weighted case). TRUTHS (keyed by TRUTH-KEY) is a sequence of opaque class labels compared with TEST to another sequence of class labels in PREDICTIONS (keyed by PREDICTION-KEY). If WEIGHT is non-nil, then it is a function that returns the weight of an element of TRUTHS. Weighted cases add their weight to both counts (returned as the first and second values) instead of 1 as in the non-weighted case.

    Note how the returned values are suitable for MULTIPLE-VALUE-CALL with #'ADD-TO-COUNTER and a CLASSIFICATION-ACCURACY-COUNTER.
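For example:

```common-lisp
(measure-classification-accuracy '(:cat :dog :cat) '(:cat :cat :cat))
;; => 2, 3  (two correct classifications out of three instances)
```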
<a id="x-28MGL-CORE-3AMEASURE-CROSS-ENTROPY-20FUNCTION-29"></a>
- [function] MEASURE-CROSS-ENTROPY TRUTHS PREDICTIONS &KEY TRUTH-KEY PREDICTION-KEY (MIN-PREDICTION-PR 1.0d-15)

    Return the sum of the cross-entropy between pairs of elements with the same index of TRUTHS and PREDICTIONS. TRUTH-KEY is a function that, when applied to an element of TRUTHS, returns a sequence representing some kind of discrete target distribution (P in the definition below). TRUTH-KEY may be NIL which is equivalent to the IDENTITY function. PREDICTION-KEY is the same kind of key for PREDICTIONS, but the sequence it returns represents a distribution that approximates (Q below) the true one.

    Cross-entropy of the true and approximating distributions is defined as:

        cross-entropy(p,q) = - sum_i p(i) * log(q(i))

    of which this function returns the sum over the pairs of elements of TRUTHS and PREDICTIONS keyed by TRUTH-KEY and PREDICTION-KEY.

    Due to the logarithm, if q(i) is close to zero, we run into numerical problems. To prevent this, all q(i) that are less than MIN-PREDICTION-PR are treated as if they were MIN-PREDICTION-PR.

    The second value returned is the sum of p(i) over all TRUTHS and all I. This is normally equal to (LENGTH TRUTHS), since elements of TRUTHS represent a probability distribution, but this is not enforced which allows relative importance of elements to be controlled.

    The third value returned is a plist that maps each index occurring in the distribution sequences to a list of two elements:

        sum_j p_j(i) * log(q_j(i))

    and

        sum_j p_j(i)

    where J indexes into TRUTHS and PREDICTIONS.

        (measure-cross-entropy '((0 1 0)) '((0.1 0.7 0.2)))
        => 0.35667497
           1
           (2 (0.0 0) 1 (0.35667497 1) 0 (0.0 0))

    Note how the returned values are suitable for MULTIPLE-VALUE-CALL with #'ADD-TO-COUNTER and a CROSS-ENTROPY-COUNTER.
<a id="x-28MGL-CORE-3AMEASURE-ROC-AUC-20FUNCTION-29"></a>
- [function] MEASURE-ROC-AUC PREDICTIONS PRED &KEY (KEY #'IDENTITY) WEIGHT

    Return the area under the ROC curve for PREDICTIONS representing predictions for a binary classification problem. PRED is a predicate function for deciding whether a prediction belongs to the so called positive class. KEY returns a number for each element which is the predictor's idea of how much that element is likely to belong to the class, although it's not necessarily a probability.

    If WEIGHT is NIL, then all elements of PREDICTIONS count as 1 towards the unnormalized sum within AUC. Else WEIGHT must be a function like KEY, but it should return the importance (a positive real number) of elements. If the weight of a prediction is 2 then it's as if there were another identical copy of that prediction in PREDICTIONS.

    The algorithm is based on algorithm 2 in the paper 'An introduction to ROC analysis' by Tom Fawcett.

    ROC AUC is equal to the probability of a randomly chosen positive having higher KEY (score) than a randomly chosen negative element. With equal scores in mind, a more precise version is: AUC is the expectation of the above probability over all possible sequences sorted by scores.
<a id="x-28MGL-CORE-3AMEASURE-CONFUSION-20FUNCTION-29"></a>
- [function] MEASURE-CONFUSION TRUTHS PREDICTIONS &KEY (TEST #'EQL) TRUTH-KEY PREDICTION-KEY WEIGHT

    Create a CONFUSION-MATRIX from TRUTHS and PREDICTIONS. TRUTHS (keyed by TRUTH-KEY) is a sequence of class labels compared with TEST to another sequence of class labels in PREDICTIONS (keyed by PREDICTION-KEY). If WEIGHT is non-nil, then it is a function that returns the weight of an element of TRUTHS. Weighted cases add their weight to both counts (returned as the first and second values).

    Note how the returned confusion matrix can be added to another with ADD-TO-COUNTER.
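A small sketch combining MEASURE-CONFUSION with the accessors documented below (the shown result is illustrative):

```common-lisp
(let ((matrix (measure-confusion '(:cat :dog :cat :dog)
                                 '(:cat :cat :cat :dog))))
  (confusion-matrix-accuracy matrix))
;; => 3/4, with 3 hits and 4 cases as further values
```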
<a id="x-28MGL-CORE-3A-40MGL-CLASSIFICATION-COUNTER-20MGL-PAX-3ASECTION-29"></a>
7.3 Classification Counters
<a id="x-28MGL-CORE-3ACLASSIFICATION-ACCURACY-COUNTER-20CLASS-29"></a>
- [class] CLASSIFICATION-ACCURACY-COUNTER BASIC-COUNTER

    A BASIC-COUNTER with "acc." as its :TYPE attribute and a PRINT-OBJECT method that prints percentages.
<a id="x-28MGL-CORE-3ACROSS-ENTROPY-COUNTER-20CLASS-29"></a>
- [class] CROSS-ENTROPY-COUNTER BASIC-COUNTER

    A BASIC-COUNTER with "xent" as its :TYPE attribute.
<a id="x-28MGL-CORE-3A-40MGL-CONFUSION-MATRIX-20MGL-PAX-3ASECTION-29"></a>
7.3.1 Confusion Matrices
<a id="x-28MGL-CORE-3ACONFUSION-MATRIX-20CLASS-29"></a>
- [class] CONFUSION-MATRIX

    A confusion matrix keeps count of classification results. The correct class is called the 'target' and the output of the classifier is called the 'prediction'.
<a id="x-28MGL-CORE-3AMAKE-CONFUSION-MATRIX-20FUNCTION-29"></a>
- [function] MAKE-CONFUSION-MATRIX &KEY (TEST #'EQL)

    Classes are compared with TEST.
<a id="x-28MGL-CORE-3ASORT-CONFUSION-CLASSES-20GENERIC-FUNCTION-29"></a>
- [generic-function] SORT-CONFUSION-CLASSES MATRIX CLASSES

    Return a list of CLASSES sorted for presentation purposes.
<a id="x-28MGL-CORE-3ACONFUSION-CLASS-NAME-20GENERIC-FUNCTION-29"></a>
- [generic-function] CONFUSION-CLASS-NAME MATRIX CLASS

    Name of CLASS for presentation purposes.
<a id="x-28MGL-CORE-3ACONFUSION-COUNT-20GENERIC-FUNCTION-29"></a>
- [generic-function] CONFUSION-COUNT MATRIX TARGET PREDICTION
<a id="x-28MGL-CORE-3AMAP-CONFUSION-MATRIX-20GENERIC-FUNCTION-29"></a>
- [generic-function] MAP-CONFUSION-MATRIX FN MATRIX

    Call FN with TARGET, PREDICTION, COUNT parameters for each cell in the confusion matrix. Cells with a zero count may be omitted.
<a id="x-28MGL-CORE-3ACONFUSION-MATRIX-CLASSES-20GENERIC-FUNCTION-29"></a>
- [generic-function] CONFUSION-MATRIX-CLASSES MATRIX

    A list of all classes. The default is to collect classes from the counts. This can be overridden if, for instance, some classes are not present in the results.
<a id="x-28MGL-CORE-3ACONFUSION-MATRIX-ACCURACY-20FUNCTION-29"></a>
- [function] CONFUSION-MATRIX-ACCURACY MATRIX &KEY FILTER

    Return the overall accuracy of the results in MATRIX. It's computed as the number of correctly classified cases (hits) divided by the number of cases. Return the number of hits and the number of cases as the second and third value. If a FILTER function is given, then call it with the target and the prediction of each cell. Disregard cells for which FILTER returns NIL.

    Precision and recall can be easily computed by giving the right filter, although those are provided in separate convenience functions.
<a id="x-28MGL-CORE-3ACONFUSION-MATRIX-PRECISION-20FUNCTION-29"></a>
- [function] CONFUSION-MATRIX-PRECISION MATRIX PREDICTION

    Return the accuracy over the cases when the classifier said PREDICTION.
<a id="x-28MGL-CORE-3ACONFUSION-MATRIX-RECALL-20FUNCTION-29"></a>
- [function] CONFUSION-MATRIX-RECALL MATRIX TARGET

    Return the accuracy over the cases when the correct class is TARGET.
<a id="x-28MGL-CORE-3AADD-CONFUSION-MATRIX-20FUNCTION-29"></a>
- [function] ADD-CONFUSION-MATRIX MATRIX RESULT-MATRIX

    Add MATRIX into RESULT-MATRIX.
<a id="x-28MGL-CORE-3A-40MGL-FEATURES-20MGL-PAX-3ASECTION-29"></a>
8 Features
[in package MGL-CORE]
<a id="x-28MGL-CORE-3A-40MGL-FEATURE-SELECTION-20MGL-PAX-3ASECTION-29"></a>
8.1 Feature Selection
The following scoring functions all return an EQUAL hash table that maps features to scores.
<a id="x-28MGL-CORE-3ACOUNT-FEATURES-20FUNCTION-29"></a>
- [function] COUNT-FEATURES DOCUMENTS MAPPER &KEY (KEY #'IDENTITY)

    Return scored features as an EQUAL hash table whose keys are features of DOCUMENTS and values are counts of occurrences of features. MAPPER takes a function and a document and calls the function with features of the document.

        (sort (alexandria:hash-table-alist
               (count-features '(("hello" "world")
                                 ("this" "is" "our" "world"))
                               (lambda (fn document)
                                 (map nil fn document))))
              #'string< :key #'car)
        => (("hello" . 1) ("is" . 1) ("our" . 1) ("this" . 1) ("world" . 2))
<a id="x-28MGL-CORE-3AFEATURE-LLRS-20FUNCTION-29"></a>
- [function] FEATURE-LLRS DOCUMENTS MAPPER CLASS-FN &KEY (CLASSES (ALL-DOCUMENT-CLASSES DOCUMENTS CLASS-FN))

    Return scored features as an EQUAL hash table whose keys are features of DOCUMENTS and values are their log likelihood ratios. MAPPER takes a function and a document and calls the function with features of the document.

        (sort (alexandria:hash-table-alist
               (feature-llrs '((:a "hello" "world")
                               (:b "this" "is" "our" "world"))
                             (lambda (fn document)
                               (map nil fn (rest document)))
                             #'first))
              #'string< :key #'car)
        => (("hello" . 2.6032386) ("is" . 2.6032386) ("our" . 2.6032386)
            ("this" . 2.6032386) ("world" . 4.8428774e-8))
<a id="x-28MGL-CORE-3AFEATURE-DISAMBIGUITIES-20FUNCTION-29"></a>
- [function] FEATURE-DISAMBIGUITIES DOCUMENTS MAPPER CLASS-FN &KEY (CLASSES (ALL-DOCUMENT-CLASSES DOCUMENTS CLASS-FN))

    Return scored features as an EQUAL hash table whose keys are features of DOCUMENTS and values are their disambiguities. MAPPER takes a function and a document and calls the function with features of the document.

    From the paper 'Using Ambiguity Measure Feature Selection Algorithm for Support Vector Machine Classifier'.
<a id="x-28MGL-CORE-3A-40MGL-FEATURE-ENCODING-20MGL-PAX-3ASECTION-29"></a>
8.2 Feature Encoding
Features can rarely be fed directly to algorithms as is; they need to be transformed in some way. Suppose we have a simple language model that takes a single word as input and predicts the next word. However, both input and output are to be encoded as float vectors of length 1000. What we do is find the top 1000 words by some measure (see Feature Selection) and associate these words with the integers in [0..999] (this is ENCODEing). By using, for example, one-hot encoding, we translate a word into a float vector when passing in the input. When the model outputs the probability distribution of the next word, we find the index of the max and find the word associated with it (this is DECODEing).
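A sketch of the one-hot step (WORD->SCORE stands for a hash table produced by a feature selection function; MAKE-INDEXER and ENCODE are documented below):

```common-lisp
(let* ((indexer (make-indexer word->score 1000))
       (index (encode indexer "hello"))
       (vector (make-array 1000 :element-type 'single-float
                                :initial-element 0.0)))
  ;; Words outside the top 1000 encode to NIL and get an all-zero vector.
  (when index
    (setf (aref vector index) 1.0))
  vector)
```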
<a id="x-28MGL-CORE-3AENCODE-20GENERIC-FUNCTION-29"></a>
- [generic-function] ENCODE ENCODER DECODED

    Encode DECODED with ENCODER. This interface is generic enough to be almost meaningless. See ENCODER/DECODER for a simple example and MGL-NLP:BAG-OF-WORDS-ENCODER for a slightly more involved one.

    If ENCODER is a function designator, then it's simply FUNCALLed with DECODED.
<a id="x-28MGL-CORE-3ADECODE-20GENERIC-FUNCTION-29"></a>
- [generic-function] DECODE DECODER ENCODED

    Decode ENCODED with DECODER. For a DECODER/ENCODER pair, (DECODE DECODER (ENCODE ENCODER OBJECT)) must be equal in some sense to OBJECT.

    If DECODER is a function designator, then it's simply FUNCALLed with ENCODED.
<a id="x-28MGL-CORE-3AENCODER-2FDECODER-20CLASS-29"></a>
- [class] ENCODER/DECODER

    Implements O(1) ENCODE and DECODE by having an internal decoded-to-encoded and an encoded-to-decoded EQUAL hash table. ENCODER/DECODER objects can be saved and loaded (see Persistence) as long as the elements in the hash tables have read/write consistency.

        (let ((indexer
                (make-indexer
                 (alexandria:alist-hash-table '(("I" . 3) ("me" . 2) ("mine" . 1)))
                 2)))
          (values (encode indexer "I")
                  (encode indexer "me")
                  (encode indexer "mine")
                  (decode indexer 0)
                  (decode indexer 1)
                  (decode indexer 2)))
        => 0
        => 1
        => NIL
        => "I"
        => "me"
        => NIL
<a id="x-28MGL-CORE-3AMAKE-INDEXER-20FUNCTION-29"></a>
- [function] MAKE-INDEXER SCORED-FEATURES N &KEY (START 0) (CLASS 'ENCODER/DECODER)

    Take the top N features from SCORED-FEATURES (see Feature Selection), assign indices to them starting from START. Return an ENCODER/DECODER (or another CLASS) that converts between objects and indices.
Also see Bag of Words.
<a id="x-28MGL-OPT-3A-40MGL-OPT-20MGL-PAX-3ASECTION-29"></a>
9 Gradient Based Optimization
[in package MGL-OPT]
We have a real valued, differentiable function F and the task is to find the parameters that minimize its value. Optimization starts from a single point in the parameter space of F, and this single point is updated iteratively based on the gradient and value of F at or around the current point.
Note that while the stated problem is that of global optimization, for non-convex functions, most algorithms will tend to converge to a local optimum.
Currently, there are two optimization algorithms: Gradient Descent (with several variants) and Conjugate Gradient both of which are first order methods (they do not need second order gradients) but more can be added with the Extension API.
<a id="x-28MGL-OPT-3AMINIMIZE-20FUNCTION-29"></a>
- [function] MINIMIZE OPTIMIZER GRADIENT-SOURCE &KEY (WEIGHTS (LIST-SEGMENTS GRADIENT-SOURCE)) (DATASET *INFINITELY-EMPTY-DATASET*)

    Minimize the value of the real valued function represented by GRADIENT-SOURCE by updating some of its parameters in WEIGHTS (a MAT or a sequence of MATs). Return WEIGHTS. DATASET (see Datasets) is a set of unoptimized parameters of the same function. For example, WEIGHTS may be the weights of a neural network while DATASET is the training set consisting of inputs suitable for SET-INPUT. The default DATASET, (*INFINITELY-EMPTY-DATASET*), is suitable for when all parameters are optimized, so there is nothing left to come from the environment.

    Optimization terminates if DATASET is a sampler and it runs out, or when some other condition is met (see TERMINATION, for example). If DATASET is a SEQUENCE, then it is reused over and over again.

    Examples for various optimizers are provided in Gradient Descent and Conjugate Gradient.
<a id="x-28MGL-OPT-3A-40MGL-OPT-ITERATIVE-OPTIMIZER-20MGL-PAX-3ASECTION-29"></a>
9.1 Iterative Optimizer
<a id="x-28MGL-OPT-3AITERATIVE-OPTIMIZER-20CLASS-29"></a>
-
[class] ITERATIVE-OPTIMIZER
An abstract base class of Gradient Descent and Conjugate Gradient based optimizers that iterate over instances until a termination condition is met.
<a id="x-28MGL-OPT-3AN-INSTANCES-20-28MGL-PAX-3AREADER-20MGL-OPT-3AITERATIVE-OPTIMIZER-29-29"></a>
-
[reader] N-INSTANCES ITERATIVE-OPTIMIZER (:N-INSTANCES = 0)
The number of instances this optimizer has seen so far. Incremented automatically during optimization.
<a id="x-28MGL-OPT-3ATERMINATION-20-28MGL-PAX-3AACCESSOR-20MGL-OPT-3AITERATIVE-OPTIMIZER-29-29"></a>
-
[accessor] TERMINATION ITERATIVE-OPTIMIZER (:TERMINATION = NIL)
If a number, it's the number of instances to train on in the sense of
N-INSTANCES. If N-INSTANCES is equal to or greater than this value,
optimization stops. If TERMINATION is NIL, then optimization will
continue. If it is T, then optimization will stop. If it is a
function of no arguments, then its return value is processed as if it
was returned by TERMINATION.
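As a hedged sketch of the function-valued case (the variable *STOP-TRAINING* is made up for this example): return T to stop early when a flag is raised, otherwise fall back to a fixed instance budget.

(defvar *stop-training* nil)

(let ((optimizer (make-instance 'sgd-optimizer)))
  (setf (termination optimizer)
        (lambda ()
          (if *stop-training*
              t
              100000))))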
<a id="x-28MGL-OPT-3AON-OPTIMIZATION-STARTED-20-28MGL-PAX-3AACCESSOR-20MGL-OPT-3AITERATIVE-OPTIMIZER-29-29"></a>
-
[accessor] ON-OPTIMIZATION-STARTED ITERATIVE-OPTIMIZER (:ON-OPTIMIZATION-STARTED = NIL)
An event hook with parameters (OPTIMIZER GRADIENT-SOURCE N-INSTANCES).
Called after initializations are performed (INITIALIZE-OPTIMIZER*,
INITIALIZE-GRADIENT-SOURCE*) but before optimization is started.
<a id="x-28MGL-OPT-3AON-OPTIMIZATION-FINISHED-20-28MGL-PAX-3AACCESSOR-20MGL-OPT-3AITERATIVE-OPTIMIZER-29-29"></a>
-
[accessor] ON-OPTIMIZATION-FINISHED ITERATIVE-OPTIMIZER (:ON-OPTIMIZATION-FINISHED = NIL)
An event hook with parameters (OPTIMIZER GRADIENT-SOURCE N-INSTANCES).
Called when optimization has finished.
<a id="x-28MGL-OPT-3AON-N-INSTANCES-CHANGED-20-28MGL-PAX-3AACCESSOR-20MGL-OPT-3AITERATIVE-OPTIMIZER-29-29"></a>
-
[accessor] ON-N-INSTANCES-CHANGED ITERATIVE-OPTIMIZER (:ON-N-INSTANCES-CHANGED = NIL)
An event hook with parameters (OPTIMIZER GRADIENT-SOURCE N-INSTANCES).
Called when optimization of a batch of instances is done and
N-INSTANCES is incremented.
Now let's discuss a few handy utilities.
<a id="x-28MGL-OPT-3AMONITOR-OPTIMIZATION-PERIODICALLY-20FUNCTION-29"></a>
-
[function] MONITOR-OPTIMIZATION-PERIODICALLY OPTIMIZER PERIODIC-FNS
For each periodic function in the list of PERIODIC-FNS, add a monitor
to OPTIMIZER's ON-OPTIMIZATION-STARTED, ON-OPTIMIZATION-FINISHED and
ON-N-INSTANCES-CHANGED hooks. The monitors are simple functions that
just call each periodic function with the event parameters (OPTIMIZER
GRADIENT-SOURCE N-INSTANCES). Return OPTIMIZER.
To log and reset the monitors of the gradient source after every 1000
instances seen by OPTIMIZER:

(monitor-optimization-periodically optimizer
                                   '((:fn log-my-test-error
                                      :period 2000)
                                     (:fn reset-optimization-monitors
                                      :period 1000
                                      :last-eval 0)))

Note that it's allowed to just pass the initargs for a PERIODIC-FN
instead of a PERIODIC-FN object itself. The :LAST-EVAL 0 bit prevents
RESET-OPTIMIZATION-MONITORS from being called at the start of the
optimization when the monitors are empty anyway.
<a id="x-28MGL-OPT-3ARESET-OPTIMIZATION-MONITORS-20GENERIC-FUNCTION-29"></a>
-
[generic-function] RESET-OPTIMIZATION-MONITORS OPTIMIZER GRADIENT-SOURCE
Report the state of MONITORS of OPTIMIZER and GRADIENT-SOURCE and
reset their counters. See MONITOR-OPTIMIZATION-PERIODICALLY for an
example of how this is used.
<a id="x-28MGL-OPT-3ARESET-OPTIMIZATION-MONITORS-20-28METHOD-20NIL-20-28MGL-OPT-3AITERATIVE-OPTIMIZER-20T-29-29-29"></a>
-
[method] RESET-OPTIMIZATION-MONITORS (OPTIMIZER ITERATIVE-OPTIMIZER) GRADIENT-SOURCE
Log the counters of the monitors of OPTIMIZER and GRADIENT-SOURCE and
reset them.
<a id="x-28MGL-OPT-3AREPORT-OPTIMIZATION-PARAMETERS-20GENERIC-FUNCTION-29"></a>
-
[generic-function] REPORT-OPTIMIZATION-PARAMETERS OPTIMIZER GRADIENT-SOURCE
A utility that's often called at the start of optimization (from
ON-OPTIMIZATION-STARTED). The default implementation logs the
description of GRADIENT-SOURCE (as in DESCRIBE) and OPTIMIZER and
calls LOG-MAT-ROOM.
<a id="x-28MGL-OPT-3A-40MGL-OPT-COST-20MGL-PAX-3ASECTION-29"></a>
9.2 Cost Function
The function being minimized is often called the cost or the loss function.
<a id="x-28MGL-COMMON-3ACOST-20GENERIC-FUNCTION-29"></a>
-
[generic-function] COST MODEL
Return the value of the cost function being minimized. Calling this
only makes sense in the context of an ongoing optimization (see
MINIMIZE). The cost is that of a batch of instances.
<a id="x-28MGL-OPT-3AMAKE-COST-MONITORS-20FUNCTION-29"></a>
-
[function] MAKE-COST-MONITORS MODEL &KEY OPERATION-MODE ATTRIBUTES
Return a list of MONITOR objects, each associated with one
BASIC-COUNTER with attribute :TYPE "cost". Implemented in terms of
MAKE-COST-MONITORS*.
<a id="x-28MGL-OPT-3AMAKE-COST-MONITORS-2A-20GENERIC-FUNCTION-29"></a>
-
[generic-function] MAKE-COST-MONITORS* MODEL OPERATION-MODE ATTRIBUTES
Identical to MAKE-COST-MONITORS bar the keyword arguments. Specialize
this to add support for new model types.
<a id="x-28MGL-GD-3A-40MGL-GD-20MGL-PAX-3ASECTION-29"></a>
9.3 Gradient Descent
[in package MGL-GD]
Gradient descent is a first-order optimization algorithm. Relying completely on first derivatives, it does not even evaluate the function to be minimized. Let's see how to minimize a numerical lisp function with respect to some of its parameters.
<a id="x-28MGL-GD-3ASGD-2ELISP-20-28MGL-PAX-3AINCLUDE-20-23P-22-2Fhome-2Fmelisgl-2Fown-2Fmgl-2Fexample-2Fsgd-2Elisp-22-20-3AHEADER-NL-20-22-60-60-60commonlisp-22-20-3AFOOTER-NL-20-22-60-60-60-22-29-29"></a>
(cl:defpackage :mgl-example-sgd
(:use #:common-lisp #:mgl))
(in-package :mgl-example-sgd)
;;; Create an object representing the sine function.
(defparameter *diff-fn-1*
(make-instance 'mgl-diffun:diffun
:fn #'sin
;; We are going to optimize its only parameter.
:weight-indices '(0)))
;;; Minimize SIN. Note that there is no dataset involved because all
;;; parameters are being optimized.
(minimize (make-instance 'sgd-optimizer :termination 1000)
*diff-fn-1*
:weights (make-mat 1))
;;; => A MAT with a single value of about -pi/2.
;;; Create a differentiable function for f(x,y)=(x-y)^2. X is a
;;; parameter whose values come from the DATASET argument passed to
;;; MINIMIZE. Y is a parameter to be optimized (a 'weight').
(defparameter *diff-fn-2*
(make-instance 'mgl-diffun:diffun
:fn (lambda (x y)
(expt (- x y) 2))
:parameter-indices '(0)
:weight-indices '(1)))
;;; Find the Y that minimizes the distance from the instances
;;; generated by the sampler.
(minimize (make-instance 'sgd-optimizer :batch-size 10)
*diff-fn-2*
:weights (make-mat 1)
:dataset (make-instance 'function-sampler
:generator (lambda ()
(list (+ 10
(gaussian-random-1))))
:max-n-samples 1000))
;;; => A MAT with a single value of about 10, the expected value of
;;; the instances in the dataset.
;;; The dataset can be a SEQUENCE in which case we'd better set
;;; TERMINATION else optimization would never finish.
(minimize (make-instance 'sgd-optimizer :termination 1000)
*diff-fn-2*
:weights (make-mat 1)
:dataset '((0) (1) (2) (3) (4) (5)))
;;; => A MAT with a single value of about 2.5.
We are going to see a number of accessors for optimizer parameters.
In general, it's allowed to SETF
real slot accessors (as opposed to
readers and writers) at any time during optimization and so is
defining a method on an optimizer subclass that computes the value
in any way. For example, to decay the learning rate on a per
mini-batch basis:
(defmethod learning-rate ((optimizer my-sgd-optimizer))
(* (slot-value optimizer 'learning-rate)
(expt 0.998
(/ (n-instances optimizer) 60000))))
<a id="x-28MGL-GD-3A-40MGL-GD-BATCH-GD-OPTIMIZER-20MGL-PAX-3ASECTION-29"></a>
9.3.1 Batch Based Optimizers
First let's see everything common to all batch based optimizers,
then discuss SGD Optimizer, Adam Optimizer and
Normalized Batch Optimizer. All batch based optimizers
are ITERATIVE-OPTIMIZER
s, so see
Iterative Optimizer too.
<a id="x-28MGL-GD-3ABATCH-GD-OPTIMIZER-20CLASS-29"></a>
-
[class] BATCH-GD-OPTIMIZER
Another abstract base class for gradient based optimizers that update
all weights simultaneously after chewing through BATCH-SIZE inputs.
See subclasses SGD-OPTIMIZER, ADAM-OPTIMIZER and
NORMALIZED-BATCH-GD-OPTIMIZER. PER-WEIGHT-BATCH-GD-OPTIMIZER may be a
better choice when some weights can go unused, for instance due to
missing input values.
<a id="x-28MGL-COMMON-3ABATCH-SIZE-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3A-3AGD-OPTIMIZER-29-29"></a>
-
[accessor] BATCH-SIZE GD-OPTIMIZER (:BATCH-SIZE = 1)
After having gone through BATCH-SIZE number of inputs, weights are
updated. With BATCH-SIZE 1, one gets Stochastic Gradient Descent.
With BATCH-SIZE equal to the number of instances in the dataset, one
gets standard, 'batch' gradient descent. With BATCH-SIZE between
these two extremes, one gets the most practical 'mini-batch'
compromise.
<a id="x-28MGL-GD-3ALEARNING-RATE-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3A-3AGD-OPTIMIZER-29-29"></a>
-
[accessor] LEARNING-RATE GD-OPTIMIZER (:LEARNING-RATE = 0.1)
This is the step size along the gradient. Decrease it if optimization diverges, increase it if it doesn't make progress.
<a id="x-28MGL-GD-3AMOMENTUM-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3A-3AGD-OPTIMIZER-29-29"></a>
-
[accessor] MOMENTUM GD-OPTIMIZER (:MOMENTUM = 0)
A value in the [0, 1) interval.
MOMENTUM
times the previous weight change is added to the gradient. 0 means no momentum.
<a id="x-28MGL-GD-3AMOMENTUM-TYPE-20-28MGL-PAX-3AREADER-20MGL-GD-3A-3AGD-OPTIMIZER-29-29"></a>
-
[reader] MOMENTUM-TYPE GD-OPTIMIZER (:MOMENTUM-TYPE = :NORMAL)
One of :NORMAL, :NESTEROV or :NONE. For pure optimization Nesterov's
momentum may be better, but it may also increase the chances of
overfitting. Using :NONE is equivalent to 0 momentum, but it also
uses less memory. Note that with :NONE, MOMENTUM is ignored even if
it is non-zero.
<a id="x-28MGL-GD-3AWEIGHT-DECAY-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3A-3AGD-OPTIMIZER-29-29"></a>
-
[accessor] WEIGHT-DECAY GD-OPTIMIZER (:WEIGHT-DECAY = 0)
An L2 penalty. It discourages large weights, much like a zero mean
gaussian prior. WEIGHT-DECAY * WEIGHT is added to the gradient to
penalize large weights. It's as if the function whose minimum is
sought had WEIGHT-DECAY * sum_i{0.5 * WEIGHT_i^2} added to it.
<a id="x-28MGL-GD-3AWEIGHT-PENALTY-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3A-3AGD-OPTIMIZER-29-29"></a>
-
[accessor] WEIGHT-PENALTY GD-OPTIMIZER (:WEIGHT-PENALTY = 0)
An L1 penalty. It encourages sparsity. SIGN(WEIGHT) * WEIGHT-PENALTY
is added to the gradient, pushing the weight towards zero. It's as if
the function whose minimum is sought had
WEIGHT-PENALTY * sum_i{abs(WEIGHT_i)} added to it. Putting it on
feature biases constitutes a sparsity constraint on the features.
<a id="x-28MGL-GD-3AUSE-SEGMENT-DERIVATIVES-P-20-28MGL-PAX-3AREADER-20MGL-GD-3A-3AGD-OPTIMIZER-29-29"></a>
-
[reader] USE-SEGMENT-DERIVATIVES-P GD-OPTIMIZER (:USE-SEGMENT-DERIVATIVES-P = NIL)
Save memory if both the gradient source (the model being optimized) and the optimizer support this feature. It works like this: the accumulator into which the gradient source is asked to place the derivatives of a segment will be
SEGMENT-DERIVATIVES
of the segment. This allows the optimizer not to allocate an accumulator matrix into which the derivatives are summed.
<a id="x-28MGL-GD-3AAFTER-UPDATE-HOOK-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3A-3AGD-OPTIMIZER-29-29"></a>
-
[accessor] AFTER-UPDATE-HOOK GD-OPTIMIZER (:AFTER-UPDATE-HOOK = NIL)
A list of functions with no arguments called after each weight update.
<a id="x-28MGL-GD-3ABEFORE-UPDATE-HOOK-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3ABATCH-GD-OPTIMIZER-29-29"></a>
-
[accessor] BEFORE-UPDATE-HOOK BATCH-GD-OPTIMIZER (:BEFORE-UPDATE-HOOK = NIL)
A list of functions of no parameters. Each function is called just before a weight update takes place (after accumulated gradients have been divided by the length of the batch). Convenient to hang some additional gradient accumulating code on.
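To make the options above concrete, here is a hedged sketch (the particular values are arbitrary, not recommendations) of constructing an SGD optimizer with mini-batches, Nesterov momentum and a small L2 penalty, using the initargs documented in this section:

(make-instance 'sgd-optimizer
               :batch-size 100
               :learning-rate 0.01
               :momentum 0.9
               :momentum-type :nesterov
               :weight-decay 0.0001)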
<a id="x-28MGL-GD-3A-40MGL-GD-SGD-OPTIMIZER-20MGL-PAX-3ASECTION-29"></a>
SGD Optimizer
<a id="x-28MGL-GD-3ASGD-OPTIMIZER-20CLASS-29"></a>
-
[class] SGD-OPTIMIZER BATCH-GD-OPTIMIZER
With BATCH-SIZE 1 this is Stochastic Gradient Descent. With higher
batch sizes, one gets mini-batch and Batch Gradient Descent.
Assuming that ACCUMULATOR has the sum of gradients for a mini-batch,
the weight update looks like this:

$$ \Delta_w^{t+1} = momentum * \Delta_w^t + \frac{accumulator}{batchsize} + l_2 w + l_1 sign(w) $$

$$ w^{t+1} = w^{t} - learningrate * \Delta_w, $$

which is the same as the more traditional formulation:

$$ \Delta_w^{t+1} = momentum * \Delta_w^{t} + learningrate * \left(\frac{\frac{df}{dw}}{batchsize} + l_2 w + l_1 sign(w)\right) $$

$$ w^{t+1} = w^{t} - \Delta_w, $$

but the former works better when batch size, momentum or learning
rate change during the course of optimization. The above is with
normal momentum; Nesterov's momentum (see MOMENTUM-TYPE) is also
available.
See Batch Based Optimizers for the description of the various options
common to all batch based optimizers.
<a id="x-28MGL-GD-3A-40MGL-GD-ADAM-OPTIMIZER-20MGL-PAX-3ASECTION-29"></a>
Adam Optimizer
<a id="x-28MGL-GD-3AADAM-OPTIMIZER-20CLASS-29"></a>
-
[class] ADAM-OPTIMIZER BATCH-GD-OPTIMIZER
Adam is a first-order stochastic gradient descent optimizer. It
maintains an internal estimation for the mean and raw variance of
each derivative as exponential moving averages. The step it takes is
basically M/(sqrt(V)+E) where M is the estimated mean, V is the
estimated variance, and E is a small adjustment factor to prevent the
gradient from blowing up. See version 5 of the paper for more.
Note that using momentum is not supported with Adam. In fact, an
error is signalled if MOMENTUM-TYPE is not :NONE.
See Batch Based Optimizers for the description of the various options
common to all batch based optimizers.
<a id="x-28MGL-GD-3ALEARNING-RATE-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3AADAM-OPTIMIZER-29-29"></a>
-
[accessor] LEARNING-RATE ADAM-OPTIMIZER (= 2.0e-4)
Same thing as
LEARNING-RATE
but with the default suggested by the Adam paper.
<a id="x-28MGL-GD-3AMEAN-DECAY-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3AADAM-OPTIMIZER-29-29"></a>
-
[accessor] MEAN-DECAY ADAM-OPTIMIZER (:MEAN-DECAY = 0.9)
A number between 0 and 1 that determines how fast the estimated mean
of derivatives is updated. 0 basically gives you RMSPROP (if
VARIANCE-DECAY is not too large) or AdaGrad (if VARIANCE-DECAY is
close to 1 and the learning rate is annealed). This is $\beta_1$ in
the paper.
<a id="x-28MGL-GD-3AMEAN-DECAY-DECAY-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3AADAM-OPTIMIZER-29-29"></a>
-
[accessor] MEAN-DECAY-DECAY ADAM-OPTIMIZER (:MEAN-DECAY-DECAY = (- 1 1.0d-7))
A value that should be close to 1.
MEAN-DECAY
is multiplied by this value after each update. This is $\lambda$ in the paper.
<a id="x-28MGL-GD-3AVARIANCE-DECAY-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3AADAM-OPTIMIZER-29-29"></a>
-
[accessor] VARIANCE-DECAY ADAM-OPTIMIZER (:VARIANCE-DECAY = 0.999)
A number between 0 and 1 that determines how fast the estimated variance of derivatives is updated. This is $\beta_2$ in the paper.
<a id="x-28MGL-GD-3AVARIANCE-ADJUSTMENT-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3AADAM-OPTIMIZER-29-29"></a>
-
[accessor] VARIANCE-ADJUSTMENT ADAM-OPTIMIZER (:VARIANCE-ADJUSTMENT = 1.0d-7)
Within the bowels of adam, the estimated mean is divided by the
square root of the estimated variance (per weight) which can lead to
numerical problems if the denominator is near zero. To avoid this,
VARIANCE-ADJUSTMENT, which should be a small positive number, is
added to the denominator. This is epsilon in the paper.
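Tying the above accessors to the paper's notation, a hedged sketch of the per-weight update (bias correction and the $\lambda$ schedule of MEAN-DECAY-DECAY are omitted here for brevity, so this is the textbook form rather than a literal transcription of the implementation):

$$ m^{t+1} = \beta_1 m^t + (1 - \beta_1) \frac{df}{dw}, \quad v^{t+1} = \beta_2 v^t + (1 - \beta_2) \left(\frac{df}{dw}\right)^2 $$

$$ w^{t+1} = w^t - learningrate * \frac{m^{t+1}}{\sqrt{v^{t+1}} + \epsilon}, $$

where $\beta_1$ is MEAN-DECAY, $\beta_2$ is VARIANCE-DECAY and $\epsilon$ is VARIANCE-ADJUSTMENT.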
<a id="x-28MGL-GD-3A-40MGL-GD-NORMALIZED-BATCH-GD-OPTIMIZER-20MGL-PAX-3ASECTION-29"></a>
Normalized Batch Optimizer
<a id="x-28MGL-GD-3ANORMALIZED-BATCH-GD-OPTIMIZER-20CLASS-29"></a>
-
[class] NORMALIZED-BATCH-GD-OPTIMIZER BATCH-GD-OPTIMIZER
Like BATCH-GD-OPTIMIZER but keeps count of how many times each weight
was used in the batch and divides the accumulated gradient by this
count instead of dividing by N-INSTANCES-IN-BATCH. This only makes a
difference if there are missing values in the learner that's being
trained. The main feature that distinguishes this class from
PER-WEIGHT-BATCH-GD-OPTIMIZER is that batches end at the same time
for all weights.
<a id="x-28MGL-GD-3AN-WEIGHT-USES-IN-BATCH-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3ANORMALIZED-BATCH-GD-OPTIMIZER-29-29"></a>
-
[accessor] N-WEIGHT-USES-IN-BATCH NORMALIZED-BATCH-GD-OPTIMIZER
Number of uses of the weight in its current batch.
<a id="x-28MGL-GD-3A-40MGL-GD-SEGMENTED-GD-OPTIMIZER-20MGL-PAX-3ASECTION-29"></a>
9.3.2 Segmented GD Optimizer
<a id="x-28MGL-GD-3ASEGMENTED-GD-OPTIMIZER-20CLASS-29"></a>
-
[class] SEGMENTED-GD-OPTIMIZER
An optimizer that delegates training of segments to other optimizers. Useful to delegate training of different segments to different optimizers (capable of working with segmentables) or simply to not train all segments.
<a id="x-28MGL-GD-3ASEGMENTER-20-28MGL-PAX-3AREADER-20MGL-GD-3ASEGMENTED-GD-OPTIMIZER-29-29"></a>
-
[reader] SEGMENTER SEGMENTED-GD-OPTIMIZER (:SEGMENTER)
When this optimizer is initialized it loops over the segments of the
learner with MAP-SEGMENTS. SEGMENTER is a function that is called
with each segment and returns an optimizer or NIL. Several segments
may be mapped to the same optimizer. After the segment->optimizer
mappings are collected, each optimizer is initialized by
INITIALIZE-OPTIMIZER with the list of segments mapped to it.
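As a hedged sketch (mirroring the digit-fnn example later in this manual), the simplest SEGMENTER maps every segment to the same freshly made optimizer; returning NIL instead would leave a segment untrained:

(make-instance 'segmented-gd-optimizer
               :segmenter (constantly
                           (make-instance 'sgd-optimizer
                                          :learning-rate 0.1
                                          :batch-size 100)))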
<a id="x-28MGL-OPT-3ASEGMENTS-20-28MGL-PAX-3AREADER-20MGL-GD-3ASEGMENTED-GD-OPTIMIZER-29-29"></a>
- [reader] SEGMENTS SEGMENTED-GD-OPTIMIZER
SEGMENTED-GD-OPTIMIZER
inherits from ITERATIVE-OPTIMIZER
, so see
Iterative Optimizer too.
<a id="x-28MGL-GD-3A-40MGL-GD-PER-WEIGHT-OPTIMIZATION-20MGL-PAX-3ASECTION-29"></a>
9.3.3 Per-weight Optimization
<a id="x-28MGL-GD-3APER-WEIGHT-BATCH-GD-OPTIMIZER-20CLASS-29"></a>
-
[class] PER-WEIGHT-BATCH-GD-OPTIMIZER
This is much like Batch Based Optimizers but it is more clever about when to update weights. Basically every weight has its own batch independent from the batches of others. This has desirable properties. One can for example put two neural networks together without adding any connections between them and the learning will produce results equivalent to the separated case. Also, adding inputs with only missing values does not change anything.
Due to its very non-batch nature, there is no CUDA implementation of this optimizer.
<a id="x-28MGL-GD-3AN-WEIGHT-USES-IN-BATCH-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3APER-WEIGHT-BATCH-GD-OPTIMIZER-29-29"></a>
-
[accessor] N-WEIGHT-USES-IN-BATCH PER-WEIGHT-BATCH-GD-OPTIMIZER
Number of uses of the weight in its current batch.
<a id="x-28MGL-GD-3A-40MGL-GD-UTILITIES-20MGL-PAX-3ASECTION-29"></a>
9.3.4 Utilities
<a id="x-28MGL-GD-3ACLIP-L2-NORM-20FUNCTION-29"></a>
-
[function] CLIP-L2-NORM MATS L2-UPPER-BOUND &KEY CALLBACK
Scale MATS so that their $L_2$ norm does not exceed L2-UPPER-BOUND.
Compute the norm of MATS as if they were a single vector. If the norm
is greater than L2-UPPER-BOUND, then scale each matrix destructively
by the norm divided by L2-UPPER-BOUND and, if non-NIL, call the
function CALLBACK with the scaling factor.
<a id="x-28MGL-GD-3AARRANGE-FOR-CLIPPING-GRADIENTS-20FUNCTION-29"></a>
-
[function] ARRANGE-FOR-CLIPPING-GRADIENTS BATCH-GD-OPTIMIZER L2-UPPER-BOUND &KEY CALLBACK
Make it so that the norm of the batch normalized gradients
accumulated by BATCH-GD-OPTIMIZER is clipped to L2-UPPER-BOUND before
every update. See CLIP-L2-NORM.
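A hedged usage sketch (OPTIMIZER is assumed to be a previously created BATCH-GD-OPTIMIZER): clip the accumulated gradients to an $L_2$ norm of 5 before every update and log the scaling factor whenever clipping actually happens.

(arrange-for-clipping-gradients
 optimizer 5
 :callback (lambda (scale)
             (format t "gradients clipped, scale: ~S~%" scale)))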
<a id="x-28MGL-CG-3A-40MGL-CG-20MGL-PAX-3ASECTION-29"></a>
9.4 Conjugate Gradient
[in package MGL-CG]
Conjugate gradient is a first-order optimization algorithm. It's more advanced than gradient descent as it does line searches which unfortunately also makes it unsuitable for non-deterministic functions. Let's see how to minimize a numerical lisp function with respect to some of its parameters.
;;; Create an object representing the sine function.
(defparameter *diff-fn-1*
(make-instance 'mgl-diffun:diffun
:fn #'sin
;; We are going to optimize its only parameter.
:weight-indices '(0)))
;;; Minimize SIN. Note that there is no dataset involved because all
;;; parameters are being optimized.
(minimize (make-instance 'cg-optimizer
:batch-size 1
:termination 1)
*diff-fn-1*
:weights (make-mat 1))
;;; => A MAT with a single value of about -pi/2.
;;; Create a differentiable function for f(x,y)=(x-y)^2. X is a
;;; parameter whose values come from the DATASET argument passed to
;;; MINIMIZE. Y is a parameter to be optimized (a 'weight').
(defparameter *diff-fn-2*
(make-instance 'mgl-diffun:diffun
:fn (lambda (x y)
(expt (- x y) 2))
:parameter-indices '(0)
:weight-indices '(1)))
;;; Find the Y that minimizes the distance from the instances
;;; generated by the sampler.
(minimize (make-instance 'cg-optimizer :batch-size 10)
*diff-fn-2*
:weights (make-mat 1)
:dataset (make-instance 'function-sampler
:generator (lambda ()
(list (+ 10
(gaussian-random-1))))
:max-n-samples 1000))
;;; => A MAT with a single value of about 10, the expected value of
;;; the instances in the dataset.
;;; The dataset can be a SEQUENCE in which case we'd better set
;;; TERMINATION else optimization would never finish. Note how a
;;; single epoch suffices.
(minimize (make-instance 'cg-optimizer :termination 6)
*diff-fn-2*
:weights (make-mat 1)
:dataset '((0) (1) (2) (3) (4) (5)))
;;; => A MAT with a single value of about 2.5.
<a id="x-28MGL-CG-3ACG-20FUNCTION-29"></a>
-
[function] CG FN W &KEY (MAX-N-LINE-SEARCHES *DEFAULT-MAX-N-LINE-SEARCHES*) (MAX-N-EVALUATIONS-PER-LINE-SEARCH *DEFAULT-MAX-N-EVALUATIONS-PER-LINE-SEARCH*) (MAX-N-EVALUATIONS *DEFAULT-MAX-N-EVALUATIONS*) (SIG *DEFAULT-SIG*) (RHO *DEFAULT-RHO*) (INT *DEFAULT-INT*) (EXT *DEFAULT-EXT*) (RATIO *DEFAULT-RATIO*) SPARE-VECTORS
CG-OPTIMIZER passes each batch of data to this function with its
CG-ARGS passed on.
Minimize a differentiable multivariate function with conjugate
gradient. The Polak-Ribiere flavour of conjugate gradients is used to
compute search directions, and a line search using quadratic and
cubic polynomial approximations and the Wolfe-Powell stopping
criteria is used together with the slope ratio method for guessing
initial step sizes. Additionally a bunch of checks are made to make
sure that exploration is taking place and that extrapolation will not
be unboundedly large.
FN is a function of two parameters: WEIGHTS and DERIVATIVES. WEIGHTS
is a MAT of the same size as W that is where the search starts from.
DERIVATIVES is also a MAT of that size and it is where FN shall place
the partial derivatives. FN returns the value of the function that is
being minimized.
CG performs a number of line searches and invokes FN at each step. A
line search invokes FN at most MAX-N-EVALUATIONS-PER-LINE-SEARCH
number of times and can succeed in improving the minimum by the
sufficient margin or it can fail. Note that even a failed line search
may improve further and hence change the weights; it's just that the
improvement was deemed too small. CG stops when either:
- two line searches fail in a row
- MAX-N-LINE-SEARCHES is reached
- MAX-N-EVALUATIONS is reached
CG returns a MAT that contains the best weights, the minimum, the
number of line searches performed, the number of successful line
searches and the number of evaluations.
When using MAX-N-EVALUATIONS remember that there is an extra
evaluation of FN before the first line search.
SPARE-VECTORS is a list of preallocated MATs of the same size as W.
Passing 6 of them covers the current need of the algorithm and it
will not cons up vectors of size W at all.
NOTE: If the function terminates within a few iterations, it could be
an indication that the function values and derivatives are not
consistent (i.e., there may be a bug in the implementation of FN).
SIG and RHO are the constants controlling the Wolfe-Powell
conditions. SIG is the maximum allowed absolute ratio between
previous and new slopes (derivatives in the search direction), thus
setting SIG to low (positive) values forces higher precision in the
line-searches. RHO is the minimum allowed fraction of the expected
(from the slope at the initial point in the linesearch). Constants
must satisfy 0 < RHO < SIG < 1. Tuning of SIG (depending on the
nature of the function to be optimized) may speed up the
minimization; it is probably not worth playing much with RHO.
<a id="x-28MGL-CG-3A-2ADEFAULT-INT-2A-20VARIABLE-29"></a>
-
[variable] *DEFAULT-INT* 0.1
Don't reevaluate within
INT
of the limit of the current bracket.
<a id="x-28MGL-CG-3A-2ADEFAULT-EXT-2A-20VARIABLE-29"></a>
-
[variable] *DEFAULT-EXT* 3
Extrapolate maximum
EXT
times the current step-size.
<a id="x-28MGL-CG-3A-2ADEFAULT-SIG-2A-20VARIABLE-29"></a>
-
[variable] *DEFAULT-SIG* 0.1
SIG and RHO are the constants controlling the Wolfe-Powell
conditions. SIG is the maximum allowed absolute ratio between
previous and new slopes (derivatives in the search direction), thus
setting SIG to low (positive) values forces higher precision in the
line-searches.
<a id="x-28MGL-CG-3A-2ADEFAULT-RHO-2A-20VARIABLE-29"></a>
-
[variable] *DEFAULT-RHO* 0.05
RHO is the minimum allowed fraction of the expected (from the slope
at the initial point in the linesearch). Constants must satisfy
0 < RHO < SIG < 1.
<a id="x-28MGL-CG-3A-2ADEFAULT-RATIO-2A-20VARIABLE-29"></a>
-
[variable] *DEFAULT-RATIO* 10
Maximum allowed slope ratio.
<a id="x-28MGL-CG-3A-2ADEFAULT-MAX-N-LINE-SEARCHES-2A-20VARIABLE-29"></a>
- [variable] *DEFAULT-MAX-N-LINE-SEARCHES* NIL
<a id="x-28MGL-CG-3A-2ADEFAULT-MAX-N-EVALUATIONS-PER-LINE-SEARCH-2A-20VARIABLE-29"></a>
- [variable] *DEFAULT-MAX-N-EVALUATIONS-PER-LINE-SEARCH* 20
<a id="x-28MGL-CG-3A-2ADEFAULT-MAX-N-EVALUATIONS-2A-20VARIABLE-29"></a>
- [variable] *DEFAULT-MAX-N-EVALUATIONS* NIL
<a id="x-28MGL-CG-3ACG-OPTIMIZER-20CLASS-29"></a>
-
[class] CG-OPTIMIZER ITERATIVE-OPTIMIZER
Updates all weights simultaneously after chewing through BATCH-SIZE
inputs.
<a id="x-28MGL-COMMON-3ABATCH-SIZE-20-28MGL-PAX-3AACCESSOR-20MGL-CG-3ACG-OPTIMIZER-29-29"></a>
-
[accessor] BATCH-SIZE CG-OPTIMIZER (:BATCH-SIZE)
After having gone through BATCH-SIZE number of instances, weights are
updated. Normally, CG operates on all available data, but it may be
useful to introduce some noise into the optimization to reduce
overfitting by using smaller batch sizes. If BATCH-SIZE is not set,
it is initialized to the size of the dataset at the start of
optimization.
<a id="x-28MGL-CG-3ACG-ARGS-20-28MGL-PAX-3AACCESSOR-20MGL-CG-3ACG-OPTIMIZER-29-29"></a>
- [accessor] CG-ARGS CG-OPTIMIZER (:CG-ARGS = 'NIL)
<a id="x-28MGL-CG-3AON-CG-BATCH-DONE-20-28MGL-PAX-3AACCESSOR-20MGL-CG-3ACG-OPTIMIZER-29-29"></a>
-
[accessor] ON-CG-BATCH-DONE CG-OPTIMIZER (:ON-CG-BATCH-DONE = NIL)
An event hook called when processing a conjugate gradient batch is done. The handlers on the hook are called with 8 arguments:
(optimizer gradient-source instances best-w best-f n-line-searches n-succesful-line-searches n-evaluations)
The latter 5 of which are the return values of the
CG
function.
<a id="x-28MGL-CG-3ALOG-CG-BATCH-DONE-20GENERIC-FUNCTION-29"></a>
-
[generic-function] LOG-CG-BATCH-DONE OPTIMIZER GRADIENT-SOURCE INSTANCES BEST-W BEST-F N-LINE-SEARCHES N-SUCCESFUL-LINE-SEARCHES N-EVALUATIONS
This is a function that can be added to ON-CG-BATCH-DONE. The default
implementation simply logs the event arguments.
<a id="x-28MGL-CG-3ASEGMENT-FILTER-20-28MGL-PAX-3AREADER-20MGL-CG-3ACG-OPTIMIZER-29-29"></a>
-
[reader] SEGMENT-FILTER CG-OPTIMIZER (:SEGMENT-FILTER = (CONSTANTLY T))
A predicate function on segments that filters out uninteresting segments. Called from
INITIALIZE-OPTIMIZER*
.
<a id="x-28MGL-OPT-3A-40MGL-OPT-EXTENSION-API-20MGL-PAX-3ASECTION-29"></a>
9.5 Extension API
<a id="x-28MGL-OPT-3A-40MGL-OPT-OPTIMIZER-20MGL-PAX-3ASECTION-29"></a>
9.5.1 Implementing Optimizers
The following generic functions must be specialized for new optimizer types.
<a id="x-28MGL-OPT-3AMINIMIZE-2A-20GENERIC-FUNCTION-29"></a>
-
[generic-function] MINIMIZE* OPTIMIZER GRADIENT-SOURCE WEIGHTS DATASET
Called by MINIMIZE after INITIALIZE-OPTIMIZER* and
INITIALIZE-GRADIENT-SOURCE*, this generic function is the main
extension point for writing optimizers.
<a id="x-28MGL-OPT-3AINITIALIZE-OPTIMIZER-2A-20GENERIC-FUNCTION-29"></a>
-
[generic-function] INITIALIZE-OPTIMIZER* OPTIMIZER GRADIENT-SOURCE WEIGHTS DATASET
Called automatically before training starts, this function sets up
OPTIMIZER to be suitable for optimizing GRADIENT-SOURCE. It typically
creates appropriately sized accumulators for the gradients.
<a id="x-28MGL-OPT-3ASEGMENTS-20GENERIC-FUNCTION-29"></a>
-
[generic-function] SEGMENTS OPTIMIZER
Several weight matrices known as segments can be optimized by a single optimizer. This function returns them as a list.
The rest are just useful utilities for implementing optimizers.
<a id="x-28MGL-OPT-3ATERMINATE-OPTIMIZATION-P-20FUNCTION-29"></a>
-
[function] TERMINATE-OPTIMIZATION-P N-INSTANCES TERMINATION
Utility function for subclasses of ITERATIVE-OPTIMIZER. It returns
whether optimization is to be terminated based on N-INSTANCES and
TERMINATION that are values of the respective accessors of
ITERATIVE-OPTIMIZER.
<a id="x-28MGL-OPT-3ASET-N-INSTANCES-20FUNCTION-29"></a>
-
[function] SET-N-INSTANCES OPTIMIZER GRADIENT-SOURCE N-INSTANCES
Set N-INSTANCES of OPTIMIZER and fire ON-N-INSTANCES-CHANGED.
ITERATIVE-OPTIMIZER subclasses must call this to increment
N-INSTANCES.
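Putting the extension points together, a hedged sketch of the typical shape of a MINIMIZE* method for an ITERATIVE-OPTIMIZER subclass (MY-OPTIMIZER and SAMPLE-BATCH are hypothetical names invented for this illustration, and the actual weight update is elided):

(defclass my-optimizer (iterative-optimizer) ())

(defmethod minimize* ((optimizer my-optimizer) gradient-source weights dataset)
  (loop until (terminate-optimization-p (n-instances optimizer)
                                        (termination optimizer))
        do (let ((batch (sample-batch dataset)))
             ;; Accumulate gradients for BATCH and update WEIGHTS here,
             ;; then account for the instances just processed.
             (set-n-instances optimizer gradient-source
                              (+ (n-instances optimizer) (length batch))))))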
<a id="x-28MGL-OPT-3ASEGMENT-SET-20CLASS-29"></a>
-
[class] SEGMENT-SET
This is a utility class for optimizers that have a list of SEGMENTS
(the weights being optimized). It is able to copy back and forth
between those segments and a single MAT (the accumulator).
<a id="x-28MGL-OPT-3ASEGMENTS-20-28MGL-PAX-3AREADER-20MGL-OPT-3ASEGMENT-SET-29-29"></a>
-
[reader] SEGMENTS SEGMENT-SET (:SEGMENTS)
A list of weight matrices.
<a id="x-28MGL-COMMON-3ASIZE-20-28MGL-PAX-3AREADER-20MGL-OPT-3ASEGMENT-SET-29-29"></a>
-
[reader] SIZE SEGMENT-SET
The sum of the sizes of the weight matrices of
SEGMENTS
.
<a id="x-28MGL-OPT-3ADO-SEGMENT-SET-20MGL-PAX-3AMACRO-29"></a>
-
[macro] DO-SEGMENT-SET (SEGMENT &OPTIONAL START) SEGMENT-SET &BODY BODY
Iterate over SEGMENTS in SEGMENT-SET. If START is specified, then it
is bound to the start index of SEGMENT within SEGMENT-SET. The start
index is the sum of the sizes of previous segments.
<a id="x-28MGL-OPT-3ASEGMENT-SET-3C-MAT-20FUNCTION-29"></a>
-
[function] SEGMENT-SET<-MAT SEGMENT-SET MAT
Copy the values of MAT to the weight matrices of SEGMENT-SET as if
they were concatenated into a single MAT.
<a id="x-28MGL-OPT-3ASEGMENT-SET--3EMAT-20FUNCTION-29"></a>
-
[function] SEGMENT-SET->MAT SEGMENT-SET MAT
Copy the values of SEGMENT-SET to MAT as if they were concatenated
into a single MAT.
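A hedged sketch of how an optimizer might use these utilities (GRADIENT-SOURCE is assumed to exist and the update step itself is elided): flatten all weights into one MAT, work on that, then copy the result back into the segments.

(let* ((segment-set (make-instance 'segment-set
                                   :segments (list-segments gradient-source)))
       (flat (make-mat (size segment-set))))
  ;; weights -> FLAT
  (segment-set->mat segment-set flat)
  ;; ... update FLAT, e.g. take a gradient step ...
  ;; FLAT -> weights
  (segment-set<-mat segment-set flat))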
<a id="x-28MGL-OPT-3A-40MGL-OPT-GRADIENT-SOURCE-20MGL-PAX-3ASECTION-29"></a>
9.5.2 Implementing Gradient Sources
Weights can be stored in a multitude of ways. Optimizers need to
update weights, so it is assumed that weights are stored in any
number of MAT
objects called segments.
The generic functions in this section must all be specialized for new gradient sources except where noted.
<a id="x-28MGL-OPT-3AMAP-SEGMENTS-20GENERIC-FUNCTION-29"></a>
-
[generic-function] MAP-SEGMENTS FN GRADIENT-SOURCE
Apply FN to each segment of GRADIENT-SOURCE.
<a id="x-28MGL-OPT-3AMAP-SEGMENT-RUNS-20GENERIC-FUNCTION-29"></a>
-
[generic-function] MAP-SEGMENT-RUNS FN SEGMENT
Call FN with start and end of intervals of consecutive indices that
are not missing in SEGMENT. Called by optimizers that support partial
updates. The default implementation assumes that all weights are
present. This only needs to be specialized if one plans to use an
optimizer that knows how to deal with unused/missing weights such as
MGL-GD:NORMALIZED-BATCH-GD-OPTIMIZER and
MGL-GD:PER-WEIGHT-BATCH-GD-OPTIMIZER.
<a id="x-28MGL-OPT-3ASEGMENT-WEIGHTS-20GENERIC-FUNCTION-29"></a>
-
[generic-function] SEGMENT-WEIGHTS SEGMENT
Return the weight matrix of SEGMENT. A segment doesn't need to be a
MAT object itself. For example, it may be a MGL-BM:CHUNK of a
MGL-BM:BM or a MGL-BP:LUMP of a MGL-BP:BPN whose NODES slot holds the
weights.
<a id="x-28MGL-OPT-3ASEGMENT-WEIGHTS-20-28METHOD-20NIL-20-28MGL-MAT-3AMAT-29-29-29"></a>
-
[method] SEGMENT-WEIGHTS (MAT MAT)
When the segment is really a
MAT
, then just return it.
<a id="x-28MGL-OPT-3ASEGMENT-DERIVATIVES-20GENERIC-FUNCTION-29"></a>
-
[generic-function] SEGMENT-DERIVATIVES SEGMENT
Return the derivatives matrix of SEGMENT. A segment doesn't need to
be a MAT object itself. For example, it may be a MGL-BM:CHUNK of a
MGL-BM:BM or a MGL-BP:LUMP of a MGL-BP:BPN whose DERIVATIVES slot
holds the gradient.
<a id="x-28MGL-OPT-3ALIST-SEGMENTS-20FUNCTION-29"></a>
-
[function] LIST-SEGMENTS GRADIENT-SOURCE
A utility function that returns the list of segments from
MAP-SEGMENTS on GRADIENT-SOURCE.
<a id="x-28MGL-OPT-3AINITIALIZE-GRADIENT-SOURCE-2A-20GENERIC-FUNCTION-29"></a>
-
[generic-function] INITIALIZE-GRADIENT-SOURCE* OPTIMIZER GRADIENT-SOURCE WEIGHTS DATASET
Called automatically before MINIMIZE* is called, this function may be
specialized if GRADIENT-SOURCE needs some kind of setup.
<a id="x-28MGL-OPT-3AINITIALIZE-GRADIENT-SOURCE-2A-20-28METHOD-20NIL-20-28T-20T-20T-20T-29-29-29"></a>
-
[method] INITIALIZE-GRADIENT-SOURCE* OPTIMIZER GRADIENT-SOURCE WEIGHTS DATASET
The default method does nothing.
<a id="x-28MGL-OPT-3AACCUMULATE-GRADIENTS-2A-20GENERIC-FUNCTION-29"></a>
-
[generic-function] ACCUMULATE-GRADIENTS* GRADIENT-SOURCE SINK BATCH MULTIPLIER VALUEP
Add MULTIPLIER times the sum of first-order gradients to accumulators
of SINK (normally accessed with DO-GRADIENT-SINK) and if VALUEP,
return the sum of values of the function being optimized for a BATCH
of instances. GRADIENT-SOURCE is the object representing the function
being optimized, SINK is the gradient sink.
Note that the number of instances in BATCH may be larger than what
GRADIENT-SOURCE processes in one go (in the sense of say,
MAX-N-STRIPES), so DO-BATCHES-FOR-MODEL or something like
(GROUP BATCH MAX-N-STRIPES) can be handy.
<a id="x-28MGL-OPT-3A-40MGL-OPT-GRADIENT-SINK-20MGL-PAX-3ASECTION-29"></a>
9.5.3 Implementing Gradient Sinks
Optimizers call ACCUMULATE-GRADIENTS*
on gradient sources. One
parameter of ACCUMULATE-GRADIENTS*
is the SINK
. A gradient sink
knows what accumulator matrix (if any) belongs to a segment. Sinks
are defined entirely by MAP-GRADIENT-SINK
.
<a id="x-28MGL-OPT-3AMAP-GRADIENT-SINK-20GENERIC-FUNCTION-29"></a>
-
[generic-function] MAP-GRADIENT-SINK FN SINK
Call FN of lambda list (SEGMENT ACCUMULATOR) on each segment and
their corresponding accumulator MAT in SINK.
<a id="x-28MGL-OPT-3ADO-GRADIENT-SINK-20MGL-PAX-3AMACRO-29"></a>
-
[macro] DO-GRADIENT-SINK ((SEGMENT ACCUMULATOR) SINK) &BODY BODY
A convenience macro on top of
MAP-GRADIENT-SINK
.
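A hedged sketch of the sink side as seen from ACCUMULATE-GRADIENTS*: the gradient source walks the sink and adds MULTIPLIER times its per-segment derivatives into the corresponding accumulators (here the derivatives are assumed to live in SEGMENT-DERIVATIVES of each segment, which need not be the case for every gradient source).

(do-gradient-sink ((segment accumulator) sink)
  ;; AXPY! computes ACCUMULATOR := MULTIPLIER * derivatives + ACCUMULATOR.
  (axpy! multiplier (segment-derivatives segment) accumulator))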
<a id="x-28MGL-DIFFUN-3A-40MGL-DIFFUN-20MGL-PAX-3ASECTION-29"></a>
10 Differentiable Functions
[in package MGL-DIFFUN]
<a id="x-28MGL-DIFFUN-3ADIFFUN-20CLASS-29"></a>
-
[class] DIFFUN
DIFFUN dresses a lisp function (in its FN slot) as a gradient source
(see Implementing Gradient Sources), which allows it to be used in
MINIMIZE. See the examples in Gradient Descent and
Conjugate Gradient.
<a id="x-28MGL-COMMON-3AFN-20-28MGL-PAX-3AREADER-20MGL-DIFFUN-3ADIFFUN-29-29"></a>
-
[reader] FN DIFFUN (:FN)
A real valued lisp function. It may have any number of parameters.
<a id="x-28MGL-DIFFUN-3APARAMETER-INDICES-20-28MGL-PAX-3AREADER-20MGL-DIFFUN-3ADIFFUN-29-29"></a>
-
[reader] PARAMETER-INDICES DIFFUN (:PARAMETER-INDICES = NIL)
The list of indices of parameters that we don't optimize. Values for these will come from the DATASET argument of
MINIMIZE
.
<a id="x-28MGL-DIFFUN-3AWEIGHT-INDICES-20-28MGL-PAX-3AREADER-20MGL-DIFFUN-3ADIFFUN-29-29"></a>
-
[reader] WEIGHT-INDICES DIFFUN (:WEIGHT-INDICES = NIL)
The list of indices of parameters to be optimized, the values of
which will come from the WEIGHTS argument of MINIMIZE.
<a id="x-28MGL-BP-3A-40MGL-BP-20MGL-PAX-3ASECTION-29"></a>
11 Backpropagation Neural Networks
[in package MGL-BP]
<a id="x-28MGL-BP-3A-40MGL-BP-OVERVIEW-20MGL-PAX-3ASECTION-29"></a>
11.1 Backprop Overview
Backpropagation Neural Networks are just functions with lots of
parameters called weights and a layered structure when presented
as a computational
graph. The
network is trained to MINIMIZE
some kind of loss function whose
value the network computes.
In this implementation, a BPN
is assembled from several
LUMP
s (roughly corresponding to layers). Both feed-forward and
recurrent neural nets are supported (FNN
and RNN
, respectively).
BPN
s can contain not only LUMP
s but other BPN
s, too. As we
see, networks are composite objects and the abstract base class for
composite and simple parts is called CLUMP
.
<a id="x-28MGL-BP-3ACLUMP-20CLASS-29"></a>
-
[class] CLUMP
A CLUMP is a LUMP or a BPN. It represents a differentiable function.
Arguments of clumps are given during instantiation. Some arguments
are clumps themselves so they get permanently wired together like
this:

(->v*m (->input :size 10 :name 'input)
       (->weight :dimensions '(10 20) :name 'weight)
       :name 'activation)

The above creates three clumps: the vector-matrix multiplication
clump called ACTIVATION which has a reference to its operands: INPUT
and WEIGHT. Note that the example just defines a function, no actual
computation has taken place, yet.
This wiring of CLUMPs is how one builds feed-forward nets (FNN) or
recurrent neural networks (RNN) that are CLUMPs themselves so one can
build nets in a hierarchical style if desired. Non-composite CLUMPs
are called LUMPs (note the loss of the C that stands for composite).
The various LUMP subtypes correspond to different layer types
(->SIGMOID, ->DROPOUT, ->RELU, ->TANH, etc).
At this point, you may want to jump ahead to get a feel for how
things work by reading the FNN
Tutorial.
<a id="x-28MGL-BP-3A-40MGL-BP-EXTENSION-API-20MGL-PAX-3ASECTION-29"></a>
11.2 Clump API
These are mostly for extension purposes. About the only thing
needed from here for normal operation is NODES
when clamping inputs
or extracting predictions.
<a id="x-28MGL-BP-3ASTRIPEDP-20GENERIC-FUNCTION-29"></a>
-
[generic-function] STRIPEDP CLUMP
For efficiency, forward and backprop phases do their stuff in batch
mode: passing a number of instances through the network in batches.
Thus clumps must be able to store values of and gradients for each of
these instances. However, some clumps produce the same result for
each instance in a batch. These clumps are the weights, the
parameters of the network. STRIPEDP returns true iff CLUMP does not
represent weights (i.e. it's not a ->WEIGHT).
For striped clumps, their NODES and DERIVATIVES are MAT objects with
a leading dimension (number of rows in the 2d case) equal to the
number of instances in the batch. Non-striped clumps have no
restriction on their shape apart from what their usage dictates.
<a id="x-28MGL-COMMON-3ANODES-20GENERIC-FUNCTION-29"></a>
-
[generic-function] NODES OBJECT
Returns a MAT object representing the state or result of OBJECT. The
first dimension of the returned matrix is equal to the number of
stripes.
CLUMPs' NODES holds the result computed by the most recent FORWARD.
For ->INPUT lumps, this is where input values shall be placed (see
SET-INPUT). Currently, the matrix is always two dimensional but this
restriction may go away in the future.
<a id="x-28MGL-BP-3ADERIVATIVES-20GENERIC-FUNCTION-29"></a>
-
[generic-function] DERIVATIVES CLUMP
Return the MAT object representing the partial derivatives of the
function CLUMP computes. The returned partial derivatives were
accumulated by previous BACKWARD calls.
This matrix is shaped like the matrix returned by NODES.
<a id="x-28MGL-BP-3AFORWARD-20GENERIC-FUNCTION-29"></a>
-
[generic-function] FORWARD CLUMP
Compute the values of the function represented by CLUMP for all
stripes and place the results into NODES of CLUMP.
<a id="x-28MGL-BP-3ABACKWARD-20GENERIC-FUNCTION-29"></a>
-
[generic-function] BACKWARD CLUMP
Compute the partial derivatives of the function represented by CLUMP
and add them to DERIVATIVES of the corresponding argument clumps. The
DERIVATIVES of CLUMP contains the sum of partial derivatives of all
clumps by the corresponding output. This function is intended to be
called after a FORWARD pass.
Take the ->SIGMOID clump for example when the network is being
applied to a batch of two instances x1 and x2. x1 and x2 are set in
the ->INPUT lump X. The sigmoid computes 1/(1+exp(-x)) where X is its
only argument clump.

f(x) = 1/(1+exp(-x))

When BACKWARD is called on the sigmoid lump, its DERIVATIVES is a 2x1
MAT object that contains the partial derivatives of the loss
function:

dL(x1)/df
dL(x2)/df

Now the BACKWARD method of the sigmoid needs to add dL(x1)/dx1 and
dL(x2)/dx2 to DERIVATIVES of X. Now,
dL(x1)/dx1 = dL(x1)/df * df(x1)/dx1 and the first term is what we
have in DERIVATIVES of the sigmoid so it only needs to calculate the
second term.
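To spell out that second term (a worked detail added here, not part of the docstring): for the sigmoid, df(x)/dx = f(x) * (1 - f(x)), so the BACKWARD method adds

dL(xi)/dxi = dL(xi)/df * f(xi) * (1 - f(xi))

to the corresponding row of the DERIVATIVES of X, where the f(xi) values can be read straight from the sigmoid's own NODES filled in by the preceding FORWARD pass.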
In addition to the above, clumps also have to support SIZE,
N-STRIPES, MAX-N-STRIPES (and the SETF methods of the latter two)
which can be accomplished just by inheriting from BPN, FNN, RNN, or a
LUMP.
<a id="x-28MGL-BP-3A-40MGL-BPN-20MGL-PAX-3ASECTION-29"></a>
11.3 BPNs
<a id="x-28MGL-BP-3ABPN-20CLASS-29"></a>
<a id="x-28MGL-CORE-3AN-STRIPES-20-28MGL-PAX-3AREADER-20MGL-BP-3ABPN-29-29"></a>
-
[reader] N-STRIPES BPN (:N-STRIPES = 1)
The current number of instances the network has. This is automatically set to the number of instances passed to
SET-INPUT
, so it rarely has to be manipulated directly although it can be set. When setN-STRIPES
of allCLUMPS
(0
1
) get set to the same value.
<a id="x-28MGL-CORE-3AMAX-N-STRIPES-20-28MGL-PAX-3AREADER-20MGL-BP-3ABPN-29-29"></a>
-
[reader] MAX-N-STRIPES BPN (:MAX-N-STRIPES = NIL)
The maximum number of instances the network can operate on in parallel. Within
BUILD-FNN
orBUILD-RNN
, it defaults toMAX-N-STRIPES
of that parent network, else it defaults to 1. When setMAX-N-STRIPES
of allCLUMPS
(0
1
) get set to the same value.
<a id="x-28MGL-BP-3ACLUMPS-20-28MGL-PAX-3AREADER-20MGL-BP-3ABPN-29-29"></a>
-
[reader] CLUMPS BPN (:CLUMPS = (MAKE-ARRAY 0 :ELEMENT-TYPE 'CLUMP :ADJUSTABLE T :FILL-POINTER T))
A topological sorted adjustable array with a fill pointer that holds the clumps that make up the network. Clumps are added to it by
ADD-CLUMP
or, more often, automatically when within aBUILD-FNN
orBUILD-RNN
. Rarely needed,FIND-CLUMP
takes care of most uses.
<a id="x-28MGL-BP-3AFIND-CLUMP-20FUNCTION-29"></a>
-
[function] FIND-CLUMP NAME BPN &KEY (ERRORP T)
Find the clump with NAME among the CLUMPS of BPN. As always, names
are compared with EQUAL. If not found, then return NIL or signal an
error depending on ERRORP.
<a id="x-28MGL-BP-3AADD-CLUMP-20FUNCTION-29"></a>
-
[function] ADD-CLUMP CLUMP BPN
Add CLUMP to BPN. MAX-N-STRIPES of CLUMP gets set to that of BPN. It
is an error to add a clump with a name already used by one of the
CLUMPS of BPN.
<a id="x-28MGL-BP-3A-40MGL-BP-TRAINING-20MGL-PAX-3ASECTION-29"></a>
11.3.1 Training
BPNs are trained to minimize the loss function they compute.
Before a BPN is passed to MINIMIZE (as its GRADIENT-SOURCE argument),
it must be wrapped in a BP-LEARNER object. BP-LEARNER has a MONITORS
slot which is used for example by RESET-OPTIMIZATION-MONITORS.
Without the bells and whistles, the basic shape of training is this:
(minimize optimizer (make-instance 'bp-learner :bpn bpn)
:dataset dataset)
<a id="x-28MGL-BP-3ABP-LEARNER-20CLASS-29"></a>
- [class] BP-LEARNER
<a id="x-28MGL-BP-3ABPN-20-28MGL-PAX-3AREADER-20MGL-BP-3ABP-LEARNER-29-29"></a>
-
[reader] BPN BP-LEARNER (:BPN)
The BPN for which this BP-LEARNER provides the gradients.
<a id="x-28MGL-CORE-3AMONITORS-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3ABP-LEARNER-29-29"></a>
-
[accessor] MONITORS BP-LEARNER (:MONITORS = NIL)
A list of
MONITOR
s.
<a id="x-28MGL-BP-3A-40MGL-BP-MONITORING-20MGL-PAX-3ASECTION-29"></a>
11.3.2 Monitoring
<a id="x-28MGL-BP-3AMONITOR-BPN-RESULTS-20FUNCTION-29"></a>
-
[function] MONITOR-BPN-RESULTS DATASET BPN MONITORS
For every batch (of size MAX-N-STRIPES of BPN) of instances in
DATASET, set the batch as the next input with SET-INPUT, perform a
FORWARD pass and apply MONITORS to the BPN (with APPLY-MONITORS).
Finally, return the counters of MONITORS. This is built on top of
MONITOR-MODEL-RESULTS.
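A hedged usage sketch (BPN and DATASET are assumed to exist, and the attribute is arbitrary): evaluate prediction cost on a dataset without training and log the resulting counters.

(log-padded
 (monitor-bpn-results dataset bpn
                      (make-cost-monitors bpn :attributes '(:event "pred."))))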
<a id="x-28MGL-BP-3AMAKE-STEP-MONITOR-MONITORS-20FUNCTION-29"></a>
-
[function] MAKE-STEP-MONITOR-MONITORS RNN &KEY (COUNTER-VALUES-FN #'COUNTER-RAW-VALUES) (MAKE-COUNTER #'MAKE-STEP-MONITOR-MONITOR-COUNTER)
Return a list of monitors, one for every monitor in STEP-MONITORS of
RNN. These monitors extract the results from their warp counterparts
with COUNTER-VALUES-FN and add them to their own counter that's
created by MAKE-COUNTER. Wow. Ew. The idea is that one does something
like this to monitor warped prediction:

(let ((*warp-time* t))
  (setf (step-monitors rnn)
        (make-cost-monitors rnn :attributes '(:event "warped pred.")))
  (monitor-bpn-results dataset rnn
                       ;; Just collect and reset the warp
                       ;; monitors after each batch of
                       ;; instances.
                       (make-step-monitor-monitors rnn)))
<a id="x-28MGL-BP-3AMAKE-STEP-MONITOR-MONITOR-COUNTER-20GENERIC-FUNCTION-29"></a>
-
[generic-function] MAKE-STEP-MONITOR-MONITOR-COUNTER STEP-COUNTER
In an RNN, STEP-COUNTER aggregates results of all the time steps
during the processing of instances in the current batch. Return a new
counter into which results from STEP-COUNTER can be accumulated when
the processing of the batch is finished. The default implementation
creates a copy of STEP-COUNTER.
<a id="x-28MGL-BP-3A-40MGL-FNN-20MGL-PAX-3ASECTION-29"></a>
11.3.3 Feed-Forward Nets
FNN
and RNN
have a lot in common (see their common superclass, BPN
).
There is very limited functionality that's specific to FNN
s so let's
get them out of the way before we study a full example.
<a id="x-28MGL-BP-3AFNN-20CLASS-29"></a>
<a id="x-28MGL-BP-3ABUILD-FNN-20MGL-PAX-3AMACRO-29"></a>
-
[macro] BUILD-FNN (&KEY FNN (CLASS ''FNN) INITARGS MAX-N-STRIPES NAME) &BODY CLUMPS
Syntactic sugar to assemble FNNs from CLUMPs. Like LET*, it is a
sequence of bindings (of symbols to CLUMPs). The names of the clumps
created default to the symbol of the binding. In case a clump is not
bound to a symbol (because it was created in a nested expression),
the local function CLUMP can be used to find the clump with the given
name in the fnn being built. Example:

(build-fnn ()
  (features (->input :size n-features))
  (biases (->weight :size n-features))
  (weights (->weight :size (* n-hiddens n-features)))
  (activations0 (->v*m :weights weights :x (clump 'features)))
  (activations (->+ :args (list biases activations0)))
  (output (->sigmoid :x activations)))
<a id="x-28MGL-BP-3A-40MGL-FNN-TUTORIAL-20MGL-PAX-3ASECTION-29"></a>
FNN
Tutorial
Hopefully this example from example/digit-fnn.lisp
illustrates
the concepts involved. If it's too dense despite the comments, then
read up on Datasets, Gradient Based Optimization and come back.
<a id="x-28MGL-BP-3ADIGIT-FNN-2ELISP-20-28MGL-PAX-3AINCLUDE-20-23P-22-2Fhome-2Fmelisgl-2Fown-2Fmgl-2Fexample-2Fdigit-fnn-2Elisp-22-20-3AHEADER-NL-20-22-60-60-60commonlisp-22-20-3AFOOTER-NL-20-22-60-60-60-22-29-29"></a>
(cl:defpackage :mgl-example-digit-fnn
(:use #:common-lisp #:mgl))
(in-package :mgl-example-digit-fnn)
;;; There are 10 possible digits used as inputs ...
(defparameter *n-inputs* 10)
;;; and we want to learn the rule that maps the input digit D to (MOD
;;; (1+ D) 3).
(defparameter *n-outputs* 3)
;;; We define a feed-forward net to be able to specialize how inputs
;;; are translated by adding a SET-INPUT method later.
(defclass digit-fnn (fnn)
())
;;; Build a DIGIT-FNN with a single hidden layer of rectified linear
;;; units and a softmax output.
(defun make-digit-fnn (&key (n-hiddens 5))
(build-fnn (:class 'digit-fnn)
(input (->input :size *n-inputs*))
(hidden-activation (->activation input :size n-hiddens))
(hidden (->relu hidden-activation))
(output-activation (->activation hidden :size *n-outputs*))
(output (->softmax-xe-loss output-activation))))
;;; This method is called with batches of 'instances' (input digits in
;;; this case) by MINIMIZE and also by MONITOR-BPN-RESULTS before
;;; performing a forward pass (i.e. computing the value of the
;;; function represented by the network). Its job is to encode the
;;; inputs by populating rows of the NODES matrix of the INPUT clump.
;;;
;;; Each input is encoded as a row of zeros with a single 1 at index
;;; determined by the input digit. This is called one-hot encoding.
;;; The TARGET could be encoded the same way, but instead we use the
;;; sparse option supported by TARGET of ->SOFTMAX-XE-LOSS.
(defmethod set-input (digits (fnn digit-fnn))
(let* ((input (nodes (find-clump 'input fnn)))
(output-lump (find-clump 'output fnn)))
(fill! 0 input)
(loop for i upfrom 0
for digit in digits
do (setf (mref input i digit) 1))
(setf (target output-lump)
(mapcar (lambda (digit)
(mod (1+ digit) *n-outputs*))
digits))))
;;; Train the network by minimizing the loss (cross-entropy here) with
;;; stochastic gradient descent.
(defun train-digit-fnn ()
(let ((optimizer
;; First create the optimizer for MINIMIZE.
(make-instance 'segmented-gd-optimizer
:segmenter
;; We train each weight lump with the same
;; parameters and, in fact, the same
;; optimizer. But it need not be so, in
;; general.
(constantly
(make-instance 'sgd-optimizer
:learning-rate 1
:momentum 0.9
:batch-size 100))))
(fnn (make-digit-fnn)))
;; The number of instances the FNN can work with in parallel. It's
;; usually equal to the batch size or is a its divisor.
(setf (max-n-stripes fnn) 50)
;; Initialize all weights randomly.
(map-segments (lambda (weights)
(gaussian-random! (nodes weights) :stddev 0.01))
fnn)
;; Arrange for training and test error to be logged.
(monitor-optimization-periodically
optimizer '((:fn log-test-error :period 10000)
(:fn reset-optimization-monitors :period 1000)))
;; Finally, start the optimization.
(minimize optimizer
;; Dress FNN in a BP-LEARNER and attach monitors for the
;; cost to it. These monitors are going to be logged and
;; reset after every 100 training instance by
;; RESET-OPTIMIZATION-MONITORS above.
(make-instance 'bp-learner
:bpn fnn
:monitors (make-cost-monitors
fnn :attributes `(:event "train")))
;; Training stops when the sampler runs out (after 10000
;; instances).
:dataset (make-sampler 10000))))
;;; Return a sampler object that produces MAX-N-SAMPLES number of
;;; random inputs (numbers between 0 and 9).
(defun make-sampler (max-n-samples)
(make-instance 'function-sampler :max-n-samples max-n-samples
:generator (lambda () (random *n-inputs*))))
;;; Log the test error. Also, describe the optimizer and the bpn at
;;; the beginning of training. Called periodically during training
;;; (see above).
(defun log-test-error (optimizer learner)
(when (zerop (n-instances optimizer))
(describe optimizer)
(describe (bpn learner)))
(log-padded
(monitor-bpn-results (make-sampler 1000) (bpn learner)
(make-cost-monitors
(bpn learner) :attributes `(:event "pred.")))))
#|
;;; Transcript follows:
(repeatably ()
(let ((*log-time* nil))
(train-digit-fnn)))
.. training at n-instances: 0
.. train cost: 0.000e+0 (0)
.. #<SEGMENTED-GD-OPTIMIZER {100E112E93}>
.. SEGMENTED-GD-OPTIMIZER description:
.. N-INSTANCES = 0
.. OPTIMIZERS = (#<SGD-OPTIMIZER
.. #<SEGMENT-SET
.. (#<->WEIGHT # :SIZE 15 1/1 :NORM 0.04473>
.. #<->WEIGHT # :SIZE 3 1/1 :NORM 0.01850>
.. #<->WEIGHT # :SIZE 50 1/1 :NORM 0.07159>
.. #<->WEIGHT # :SIZE 5 1/1 :NORM 0.03056>)
.. {100E335B73}>
.. {100E06DF83}>)
.. SEGMENTS = (#<->WEIGHT (HIDDEN OUTPUT-ACTIVATION) :SIZE
.. 15 1/1 :NORM 0.04473>
.. #<->WEIGHT (:BIAS OUTPUT-ACTIVATION) :SIZE
.. 3 1/1 :NORM 0.01850>
.. #<->WEIGHT (INPUT HIDDEN-ACTIVATION) :SIZE
.. 50 1/1 :NORM 0.07159>
.. #<->WEIGHT (:BIAS HIDDEN-ACTIVATION) :SIZE
.. 5 1/1 :NORM 0.03056>)
..
.. #<SGD-OPTIMIZER {100E06DF83}>
.. GD-OPTIMIZER description:
.. N-INSTANCES = 0
.. SEGMENT-SET = #<SEGMENT-SET
.. (#<->WEIGHT (HIDDEN OUTPUT-ACTIVATION) :SIZE
.. 15 1/1 :NORM 0.04473>
.. #<->WEIGHT (:BIAS OUTPUT-ACTIVATION) :SIZE
.. 3 1/1 :NORM 0.01850>
.. #<->WEIGHT (INPUT HIDDEN-ACTIVATION) :SIZE
.. 50 1/1 :NORM 0.07159>
.. #<->WEIGHT (:BIAS HIDDEN-ACTIVATION) :SIZE
.. 5 1/1 :NORM 0.03056>)
.. {100E335B73}>
.. LEARNING-RATE = 1.00000e+0
.. MOMENTUM = 9.00000e-1
.. MOMENTUM-TYPE = :NORMAL
.. WEIGHT-DECAY = 0.00000e+0
.. WEIGHT-PENALTY = 0.00000e+0
.. N-AFTER-UPATE-HOOK = 0
.. BATCH-SIZE = 100
..
.. BATCH-GD-OPTIMIZER description:
.. N-BEFORE-UPATE-HOOK = 0
.. #<DIGIT-FNN {100E11A423}>
.. BPN description:
.. CLUMPS = #(#<->INPUT INPUT :SIZE 10 1/50 :NORM 0.00000>
.. #<->ACTIVATION
.. (HIDDEN-ACTIVATION :ACTIVATION) :STRIPES 1/50
.. :CLUMPS 4>
.. #<->RELU HIDDEN :SIZE 5 1/50 :NORM 0.00000>
.. #<->ACTIVATION
.. (OUTPUT-ACTIVATION :ACTIVATION) :STRIPES 1/50
.. :CLUMPS 4>
.. #<->SOFTMAX-XE-LOSS OUTPUT :SIZE 3 1/50 :NORM 0.00000>)
.. N-STRIPES = 1
.. MAX-N-STRIPES = 50
.. pred. cost: 1.100d+0 (1000.00)
.. training at n-instances: 1000
.. train cost: 1.093d+0 (1000.00)
.. training at n-instances: 2000
.. train cost: 5.886d-1 (1000.00)
.. training at n-instances: 3000
.. train cost: 3.574d-3 (1000.00)
.. training at n-instances: 4000
.. train cost: 1.601d-7 (1000.00)
.. training at n-instances: 5000
.. train cost: 1.973d-9 (1000.00)
.. training at n-instances: 6000
.. train cost: 4.882d-10 (1000.00)
.. training at n-instances: 7000
.. train cost: 2.771d-10 (1000.00)
.. training at n-instances: 8000
.. train cost: 2.283d-10 (1000.00)
.. training at n-instances: 9000
.. train cost: 2.123d-10 (1000.00)
.. training at n-instances: 10000
.. train cost: 2.263d-10 (1000.00)
.. pred. cost: 2.210d-10 (1000.00)
..
==> (#<->WEIGHT (:BIAS HIDDEN-ACTIVATION) :SIZE 5 1/1 :NORM 2.94294>
--> #<->WEIGHT (INPUT HIDDEN-ACTIVATION) :SIZE 50 1/1 :NORM 11.48995>
--> #<->WEIGHT (:BIAS OUTPUT-ACTIVATION) :SIZE 3 1/1 :NORM 3.39103>
--> #<->WEIGHT (HIDDEN OUTPUT-ACTIVATION) :SIZE 15 1/1 :NORM 11.39339>)
|#
<a id="x-28MGL-BP-3A-40MGL-RNN-20MGL-PAX-3ASECTION-29"></a>
11.3.4 Recurrent Neural Nets
<a id="x-28MGL-BP-3A-40MGL-RNN-TUTORIAL-20MGL-PAX-3ASECTION-29"></a>
RNN
Tutorial
Hopefully this example from example/sum-sign-rnn.lisp
illustrates
the concepts involved. Make sure you are comfortable with
FNN
Tutorial before reading this.
<a id="x-28MGL-BP-3ASUM-SIG-RNN-2ELISP-20-28MGL-PAX-3AINCLUDE-20-23P-22-2Fhome-2Fmelisgl-2Fown-2Fmgl-2Fexample-2Fsum-sign-rnn-2Elisp-22-20-3AHEADER-NL-20-22-60-60-60commonlisp-22-20-3AFOOTER-NL-20-22-60-60-60-22-29-29"></a>
(cl:defpackage :mgl-example-sum-sign-rnn
(:use #:common-lisp #:mgl))
(in-package :mgl-example-sum-sign-rnn)
;;; There is a single input at each time step...
(defparameter *n-inputs* 1)
;;; and we want to learn the rule that outputs the sign of the sum of
;;; inputs so far in the sequence.
(defparameter *n-outputs* 3)
;;; Generate a training example that's a sequence of random length
;;; between 1 and LENGTH. Elements of the sequence are lists of two
;;; elements:
;;;
;;; 1. The input for the network (a single random number).
;;;
;;; 2. The sign of the sum of inputs so far encoded as 0, 1, 2 (for
;;; negative, zero and positive values). To add a twist, the sum is
;;; reset whenever a negative input is seen.
(defun make-sum-sign-instance (&key (length 10))
(let ((length (max 1 (random length)))
(sum 0))
(loop for i below length
collect (let ((x (1- (* 2 (random 2)))))
(incf sum x)
(when (< x 0)
(setq sum x))
(list x (cond ((minusp sum) 0)
((zerop sum) 1)
(t 2)))))))
;;; Build an RNN with a single lstm hidden layer and softmax output.
;;; For each time step, a SUM-SIGN-FNN will be instantiated.
(defun make-sum-sign-rnn (&key (n-hiddens 1))
(build-rnn ()
(build-fnn (:class 'sum-sign-fnn)
(input (->input :size 1))
(h (->lstm input :name 'h :size n-hiddens))
(prediction (->softmax-xe-loss (->activation h :name 'prediction
:size *n-outputs*))))))
;;; We define this class to be able to specialize how inputs are
;;; translated by adding a SET-INPUT method later.
(defclass sum-sign-fnn (fnn)
())
;;; We have a batch of instances from MAKE-SUM-SIGN-INSTANCE for the
;;; RNN. This function is invoked with elements of these instances
;;; belonging to the same time step (i.e. at the same index) and sets
;;; the input and target up.
(defmethod set-input (instances (fnn sum-sign-fnn))
(let ((input-nodes (nodes (find-clump 'input fnn))))
(setf (target (find-clump 'prediction fnn))
(loop for stripe upfrom 0
for instance in instances
collect
;; Sequences in the batch are not of equal length. The
;; RNN sends a NIL our way if a sequence has run out.
(when instance
(destructuring-bind (input target) instance
(setf (mref input-nodes stripe 0) input)
target))))))
;;; Train the network by minimizing the loss (cross-entropy here) with
;;; the Adam optimizer.
(defun train-sum-sign-rnn ()
(let ((rnn (make-sum-sign-rnn)))
(setf (max-n-stripes rnn) 50)
;; Initialize the weights in the usual sqrt(1 / fan-in) style.
(map-segments (lambda (weights)
(let* ((fan-in (mat-dimension (nodes weights) 0))
(limit (sqrt (/ 6 fan-in))))
(uniform-random! (nodes weights)
:limit (* 2 limit))
(.+! (- limit) (nodes weights))))
rnn)
(minimize (monitor-optimization-periodically
(make-instance 'adam-optimizer
:learning-rate 0.2
:mean-decay 0.9
:mean-decay-decay 0.9
:variance-decay 0.9
:batch-size 100)
'((:fn log-test-error :period 30000)
(:fn reset-optimization-monitors :period 3000)))
(make-instance 'bp-learner
:bpn rnn
:monitors (make-cost-monitors rnn))
:dataset (make-sampler 30000))))
;;; Return a sampler object that produces MAX-N-SAMPLES number of
;;; random inputs.
(defun make-sampler (max-n-samples &key (length 10))
(make-instance 'function-sampler :max-n-samples max-n-samples
:generator (lambda ()
(make-sum-sign-instance :length length))))
;;; Log the test error. Also, describe the optimizer and the bpn at
;;; the beginning of training. Called periodically during training
;;; (see above).
(defun log-test-error (optimizer learner)
(when (zerop (n-instances optimizer))
(describe optimizer)
(describe (bpn learner)))
(let ((rnn (bpn learner)))
(log-padded
(append
(monitor-bpn-results (make-sampler 1000) rnn
(make-cost-monitors
rnn :attributes '(:event "pred.")))
;; Same result in a different way: monitor predictions for
;; sequences up to length 20, but don't unfold the RNN
;; unnecessarily to save memory.
(let ((*warp-time* t))
(monitor-bpn-results (make-sampler 1000 :length 20) rnn
;; Just collect and reset the warp
;; monitors after each batch of
;; instances.
(make-cost-monitors
rnn :attributes '(:event "warped pred."))))))
;; Verify that no further unfoldings took place.
(assert (<= (length (clumps rnn)) 10)))
(log-mat-room))
#|
;;; Transcript follows:
(let (;; Backprop nets do not need double float. Using single floats
;; is faster and needs less memory.
(*default-mat-ctype* :float)
;; Enable moving data in and out of GPU memory so that the RNN
;; can work with sequences so long that the unfolded network
;; wouldn't otherwise fit in the GPU.
(*cuda-window-start-time* 1)
(*log-time* nil))
;; Seed the random number generators.
(repeatably ()
;; Enable CUDA if available.
(with-cuda* ()
(train-sum-sign-rnn))))
.. training at n-instances: 0
.. cost: 0.000e+0 (0)
.. #<ADAM-OPTIMIZER {1006CD5663}>
.. GD-OPTIMIZER description:
.. N-INSTANCES = 0
.. SEGMENT-SET = #<SEGMENT-SET
.. (#<->WEIGHT (H #) :SIZE 1 1/1 :NORM 1.73685>
.. #<->WEIGHT (H #) :SIZE 1 1/1 :NORM 0.31893>
.. #<->WEIGHT (#1=# #2=# :PEEPHOLE) :SIZE
.. 1 1/1 :NORM 1.81610>
.. #<->WEIGHT (H #2#) :SIZE 1 1/1 :NORM 0.21965>
.. #<->WEIGHT (#1# #3=# :PEEPHOLE) :SIZE
.. 1 1/1 :NORM 1.74939>
.. #<->WEIGHT (H #3#) :SIZE 1 1/1 :NORM 0.40377>
.. #<->WEIGHT (H PREDICTION) :SIZE
.. 3 1/1 :NORM 2.15898>
.. #<->WEIGHT (:BIAS PREDICTION) :SIZE
.. 3 1/1 :NORM 2.94470>
.. #<->WEIGHT (#1# #4=# :PEEPHOLE) :SIZE
.. 1 1/1 :NORM 0.97601>
.. #<->WEIGHT (INPUT #4#) :SIZE 1 1/1 :NORM 0.65261>
.. #<->WEIGHT (:BIAS #4#) :SIZE 1 1/1 :NORM 0.37653>
.. #<->WEIGHT (INPUT #1#) :SIZE 1 1/1 :NORM 0.92334>
.. #<->WEIGHT (:BIAS #1#) :SIZE 1 1/1 :NORM 0.01609>
.. #<->WEIGHT (INPUT #5=#) :SIZE 1 1/1 :NORM 1.09995>
.. #<->WEIGHT (:BIAS #5#) :SIZE 1 1/1 :NORM 1.41244>
.. #<->WEIGHT (INPUT #6=#) :SIZE 1 1/1 :NORM 0.40475>
.. #<->WEIGHT (:BIAS #6#) :SIZE 1 1/1 :NORM 1.75358>)
.. {1006CD8753}>
.. LEARNING-RATE = 2.00000e-1
.. MOMENTUM = NONE
.. MOMENTUM-TYPE = :NONE
.. WEIGHT-DECAY = 0.00000e+0
.. WEIGHT-PENALTY = 0.00000e+0
.. N-AFTER-UPATE-HOOK = 0
.. BATCH-SIZE = 100
..
.. BATCH-GD-OPTIMIZER description:
.. N-BEFORE-UPATE-HOOK = 0
..
.. ADAM-OPTIMIZER description:
.. MEAN-DECAY-RATE = 1.00000e-1
.. MEAN-DECAY-RATE-DECAY = 9.00000e-1
.. VARIANCE-DECAY-RATE = 1.00000e-1
.. VARIANCE-ADJUSTMENT = 1.00000d-7
.. #<RNN {10047C77E3}>
.. BPN description:
.. CLUMPS = #(#<SUM-SIGN-FNN :STRIPES 1/50 :CLUMPS 4>
.. #<SUM-SIGN-FNN :STRIPES 1/50 :CLUMPS 4>)
.. N-STRIPES = 1
.. MAX-N-STRIPES = 50
..
.. RNN description:
.. MAX-LAG = 1
.. pred. cost: 1.223e+0 (4455.00)
.. warped pred. cost: 1.228e+0 (9476.00)
.. Foreign memory usage:
.. foreign arrays: 162 (used bytes: 39,600)
.. CUDA memory usage:
.. device arrays: 114 (used bytes: 220,892, pooled bytes: 19,200)
.. host arrays: 162 (used bytes: 39,600)
.. host->device copies: 6,164, device->host copies: 4,490
.. training at n-instances: 3000
.. cost: 3.323e-1 (13726.00)
.. training at n-instances: 6000
.. cost: 3.735e-2 (13890.00)
.. training at n-instances: 9000
.. cost: 1.012e-2 (13872.00)
.. training at n-instances: 12000
.. cost: 3.026e-3 (13953.00)
.. training at n-instances: 15000
.. cost: 9.267e-4 (13948.00)
.. training at n-instances: 18000
.. cost: 2.865e-4 (13849.00)
.. training at n-instances: 21000
.. cost: 8.893e-5 (13758.00)
.. training at n-instances: 24000
.. cost: 2.770e-5 (13908.00)
.. training at n-instances: 27000
.. cost: 8.514e-6 (13570.00)
.. training at n-instances: 30000
.. cost: 2.705e-6 (13721.00)
.. pred. cost: 1.426e-6 (4593.00)
.. warped pred. cost: 1.406e-6 (9717.00)
.. Foreign memory usage:
.. foreign arrays: 216 (used bytes: 52,800)
.. CUDA memory usage:
.. device arrays: 148 (used bytes: 224,428, pooled bytes: 19,200)
.. host arrays: 216 (used bytes: 52,800)
.. host->device copies: 465,818, device->host copies: 371,990
..
==> (#<->WEIGHT (H (H :OUTPUT)) :SIZE 1 1/1 :NORM 0.10624>
--> #<->WEIGHT (H (H :CELL)) :SIZE 1 1/1 :NORM 0.94460>
--> #<->WEIGHT ((H :CELL) (H :FORGET) :PEEPHOLE) :SIZE 1 1/1 :NORM 0.61312>
--> #<->WEIGHT (H (H :FORGET)) :SIZE 1 1/1 :NORM 0.38093>
--> #<->WEIGHT ((H :CELL) (H :INPUT) :PEEPHOLE) :SIZE 1 1/1 :NORM 1.17956>
--> #<->WEIGHT (H (H :INPUT)) :SIZE 1 1/1 :NORM 0.88011>
--> #<->WEIGHT (H PREDICTION) :SIZE 3 1/1 :NORM 49.93808>
--> #<->WEIGHT (:BIAS PREDICTION) :SIZE 3 1/1 :NORM 10.98112>
--> #<->WEIGHT ((H :CELL) (H :OUTPUT) :PEEPHOLE) :SIZE 1 1/1 :NORM 0.67996>
--> #<->WEIGHT (INPUT (H :OUTPUT)) :SIZE 1 1/1 :NORM 0.65251>
--> #<->WEIGHT (:BIAS (H :OUTPUT)) :SIZE 1 1/1 :NORM 10.23003>
--> #<->WEIGHT (INPUT (H :CELL)) :SIZE 1 1/1 :NORM 5.98116>
--> #<->WEIGHT (:BIAS (H :CELL)) :SIZE 1 1/1 :NORM 0.10681>
--> #<->WEIGHT (INPUT (H :FORGET)) :SIZE 1 1/1 :NORM 4.46301>
--> #<->WEIGHT (:BIAS (H :FORGET)) :SIZE 1 1/1 :NORM 1.57195>
--> #<->WEIGHT (INPUT (H :INPUT)) :SIZE 1 1/1 :NORM 0.36401>
--> #<->WEIGHT (:BIAS (H :INPUT)) :SIZE 1 1/1 :NORM 8.63833>)
|#
<a id="x-28MGL-BP-3ARNN-20CLASS-29"></a>
-
[class] RNN BPN
A recurrent neural net (as opposed to a feed-forward one). It is typically built with BUILD-RNN, which is no more than a shallow convenience macro.
An RNN takes instances as inputs that are sequences of variable length. At each time step, the next unprocessed elements of these sequences are set as input until all input sequences in the batch run out. To be able to perform backpropagation, all intermediate LUMPs must be kept around, so the recursive connections are transformed out by unfolding the network. Just how many lumps this means depends on the length of the sequences.
When an RNN is created, MAX-LAG + 1 BPNs are instantiated so that all weights are present and one can start training it.
<a id="x-28MGL-BP-3AUNFOLDER-20-28MGL-PAX-3AREADER-20MGL-BP-3ARNN-29-29"></a>
-
[reader] UNFOLDER RNN (:UNFOLDER)
The UNFOLDER of an RNN is a function of no arguments that builds and returns a BPN. The unfolder is allowed to create networks with arbitrary topology, even different ones for different TIME-STEPs with the help of LAG, or nested RNNs. Weights of the same name are shared between the folds. That is, if a ->WEIGHT lump were to be created and a weight lump of the same name already exists, then the existing lump will be added to the BPN created by UNFOLDER.
<a id="x-28MGL-BP-3AMAX-LAG-20-28MGL-PAX-3AREADER-20MGL-BP-3ARNN-29-29"></a>
-
[reader] MAX-LAG RNN (:MAX-LAG = 1)
The networks built by
UNFOLDER
may contain new weights up to time stepMAX-LAG
. Beyond that point, all weight lumps must be reappearances of weight lumps with the same name at previous time steps. Most recurrent networks reference only the state of lumps at the previous time step (with the functionLAG
), hence the default of 1. But it is possible to have connections to arbitrary time steps. The maximum connection lag must be specified when creating theRNN
.
<a id="x-28MGL-BP-3ACUDA-WINDOW-START-TIME-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3ARNN-29-29"></a>
-
[accessor] CUDA-WINDOW-START-TIME RNN (:CUDA-WINDOW-START-TIME = *CUDA-WINDOW-START-TIME*)
Due to unfolding, the memory footprint of an RNN is almost linear in the number of time steps (i.e. the max sequence length). For prediction, this is addressed by Time Warp. For training, we cannot discard results of previous time steps because they are needed for backpropagation, but we can at least move them out of GPU memory if they are not going to be used for a while and copy them back before they are needed. Obviously, this is only relevant if CUDA is being used.
If CUDA-WINDOW-START-TIME is NIL, then this feature is turned off. Else, during training, at CUDA-WINDOW-START-TIME or later time steps, matrices belonging to non-weight lumps may be forced out of GPU memory and later brought back as needed.
This feature is implemented in terms of MGL-MAT:WITH-SYNCING-CUDA-FACETS that uses CUDA host memory (also known as page-locked or pinned memory) to do asynchronous copies concurrently with normal computation. The consequence of this is that it is now main memory usage that's unbounded, which together with page-locking makes it a potent weapon to bring a machine to a halt. You were warned.
<a id="x-28MGL-BP-3A-2ACUDA-WINDOW-START-TIME-2A-20VARIABLE-29"></a>
-
[variable] *CUDA-WINDOW-START-TIME* NIL
The default for
CUDA-WINDOW-START-TIME
.
<a id="x-28MGL-BP-3ABUILD-RNN-20MGL-PAX-3AMACRO-29"></a>
-
[macro] BUILD-RNN (&KEY RNN (CLASS ''RNN) NAME INITARGS MAX-N-STRIPES (MAX-LAG 1)) &BODY BODY
Create an RNN with MAX-N-STRIPES and MAX-LAG whose UNFOLDER is BODY wrapped in a lambda. Bind the symbol given as the RNN argument to the RNN object so that BODY can see it.
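For instance, a minimal sketch along the lines of the tutorial above (MAKE-TINY-RNN is illustrative, not part of the library):
;;; The body of BUILD-RNN becomes the UNFOLDER of the RNN: it is run
;;; again whenever a new time step needs to be unfolded.
(defun make-tiny-rnn ()
  (build-rnn (:max-n-stripes 10)
    (build-fnn ()
      (input (->input :size 2))
      (h (->lstm input :name 'h :size 4))
      (prediction (->softmax-xe-loss
                   (->activation h :name 'prediction :size 2))))))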
<a id="x-28MGL-BP-3ALAG-20FUNCTION-29"></a>
-
[function] LAG NAME &KEY (LAG 1) RNN PATH
In RNN or, if it's NIL, the RNN being extended with another BPN (called unfolding), look up the CLUMP with NAME in the BPN that's LAG number of time steps before the BPN being added. If this function is called from the UNFOLDER of an RNN (which is what happens behind the scenes in the body of BUILD-RNN), then it returns an opaque object representing a lagged connection to a clump, else it returns the CLUMP itself. FIXDOC: PATH
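A hedged sketch (not from the manual) of using LAG directly in an unfolder, together with TIME-STEP, to build a plain recurrent hidden layer:
;;; From time step 1 onwards, H's activation also receives a dense
;;; connection from H's own value at the previous time step. Since new
;;; weights appear only up to time step 1, the default MAX-LAG of 1 is
;;; sufficient.
(build-rnn ()
  (build-fnn ()
    (input (->input :size 1))
    (h (->tanh (->activation (if (zerop (time-step))
                                 (list input)
                                 (list input (lag 'h)))
                             :name 'h :size 5)
               :name 'h))
    (prediction (->softmax-xe-loss
                 (->activation h :name 'prediction :size 3)))))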
<a id="x-28MGL-BP-3ATIME-STEP-20FUNCTION-29"></a>
-
[function] TIME-STEP &KEY (RNN *RNN*)
Return the time step
RNN
is currently executing or being unfolded for. It is 0 when theRNN
is being unfolded for the first time.
<a id="x-28MGL-CORE-3ASET-INPUT-20-28METHOD-20NIL-20-28T-20MGL-BP-3ARNN-29-29-29"></a>
-
[method] SET-INPUT INSTANCES (RNN RNN)
RNNs operate on batches of instances just like FNNs. But the instances here are like datasets: sequences or samplers, and they are turned into sequences of batches of instances with MAP-DATASETS :IMPUTE NIL. The batch of instances at index 2 is clamped onto the BPN at time step 2 with SET-INPUT.
When the input sequences in the batch are not of the same length, already exhausted sequences will produce NIL (due to :IMPUTE NIL above). When such a NIL is clamped with SET-INPUT on a BPN of the RNN, SET-INPUT must set the IMPORTANCE of the ->ERROR lumps to 0, else training would operate on the noise left there by previous invocations.
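A hedged sketch of such a SET-INPUT method for a network with a ->LOSS lump named LOSS (MY-FNN is hypothetical; MAKE-MAT is from MGL-MAT):
;;; Zero the importance of stripes whose sequence has already run out so
;;; that whatever is left in their nodes contributes no gradient.
(defmethod set-input (instances (fnn my-fnn))
  ;; ... clamp inputs and targets for the non-NIL instances as usual ...
  (setf (importance (find-clump 'loss fnn))
        (make-mat (length instances)
                  :initial-contents (map 'list (lambda (instance)
                                                 (if instance 1 0))
                                         instances))))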
<a id="x-28MGL-BP-3A-40MGL-RNN-TIME-WARP-20MGL-PAX-3ASECTION-29"></a>
Time Warp
The unbounded memory usage of RNNs with one BPN allocated per time step can become a problem. For training, where the gradients often have to be backpropagated from the last time step to the very beginning, this is hard to solve, but with CUDA-WINDOW-START-TIME the limit is no longer GPU memory.
For prediction on the other hand, one doesn't need to keep old steps around indefinitely: they can be discarded when future time steps will never reference them again.
<a id="x-28MGL-BP-3A-2AWARP-TIME-2A-20VARIABLE-29"></a>
-
[variable] *WARP-TIME* NIL
Controls whether warping is enabled (see Time Warp). Don't enable it for training, as it would make backprop impossible.
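For example (mirroring LOG-TEST-ERROR in the tutorial above), warping can be enabled just around prediction:
;;; Monitor prediction cost on long sequences without unfolding the RNN
;;; beyond its warp cycle.
(let ((*warp-time* t))
  (monitor-bpn-results (make-sampler 1000 :length 20) rnn
                       (make-cost-monitors rnn)))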
<a id="x-28MGL-BP-3AWARPED-TIME-20FUNCTION-29"></a>
-
[function] WARPED-TIME &KEY (RNN *RNN*) (TIME (TIME-STEP :RNN RNN)) (LAG 0)
Return the index of the BPN in CLUMPS of RNN whose task it is to execute computation at (- (TIME-STEP RNN) LAG). This is normally the same as TIME-STEP (disregarding LAG). That is, CLUMPS can be indexed by TIME-STEP to get the BPN. However, when *WARP-TIME* is true, execution proceeds in a cycle as the structure of the network allows.
Suppose we have a typical RNN that only ever references the previous time step, so its MAX-LAG is 1. Its UNFOLDER returns BPNs of identical structure bar a shift in their time lagged connections except for the very first, so WARP-START and WARP-LENGTH are both 1. If *WARP-TIME* is NIL, then the mapping from TIME-STEP to the BPN in CLUMPS is straightforward:
time:   | 0  | 1  | 2  | 3  | 4  | 5
--------+----+----+----+----+----+----
warped: | 0  | 1  | 2  | 3  | 4  | 5
--------+----+----+----+----+----+----
bpn:    | b0 | b1 | b2 | b3 | b4 | b5
When *WARP-TIME* is true, we reuse the B1 - B2 bpns in a loop:
time:   | 0  | 1  | 2  | 3  | 4  | 5
--------+----+----+----+----+----+----
warped: | 0  | 1  | 2  | 1  | 2  | 1
--------+----+----+----+----+----+----
bpn:    | b0 | b1 | b2 | b1*| b2 | b1*
B1* is the same BPN as B1, but its connections created by LAG go through warped time and end up referencing B2. This way, memory consumption is independent of the number of time steps needed to process a sequence or make predictions.
To be able to pull off this trick, WARP-START and WARP-LENGTH must be specified when the RNN is instantiated. In general, with *WARP-TIME*, (+ WARP-START (MAX 2 WARP-LENGTH)) bpns are needed. The 2 comes from the fact that with cycle length 1 a bpn would need to take its input from itself, which is problematic because it has NODES for only one set of values.
<a id="x-28MGL-BP-3AWARP-START-20-28MGL-PAX-3AREADER-20MGL-BP-3ARNN-29-29"></a>
-
[reader] WARP-START RNN (:WARP-START = 1)
The
TIME-STEP
from whichUNFOLDER
will createBPN
s that essentially repeat everyWARP-LENGTH
steps.
<a id="x-28MGL-BP-3AWARP-LENGTH-20-28MGL-PAX-3AREADER-20MGL-BP-3ARNN-29-29"></a>
-
[reader] WARP-LENGTH RNN (:WARP-LENGTH = 1)
An integer such that the
BPN
UNFOLDER
creates at time stepI
(where(<= WARP-START I)
) is identical to theBPN
created at time step(+ WARP-START (MOD (- I WARP-START) WARP-LENGTH))
except for a shift in its time lagged connections.
<a id="x-28MGL-BP-3ASTEP-MONITORS-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3ARNN-29-29"></a>
-
[accessor] STEP-MONITORS RNN (:STEP-MONITORS = NIL)
During training, unfolded BPNs corresponding to previous time steps may be expensive to get at because they are no longer in GPU memory. This consideration also applies to making predictions, with the additional caveat that with *WARP-TIME* true, previous states are discarded so it's not possible to gather statistics after FORWARD finished.
Add monitor objects to this slot and they will be automatically applied to the RNN after each step when FORWARDing the RNN during training or prediction. To be able to easily switch between sets of monitors, in addition to a list of monitors this can be a symbol or a function, too. If it's a symbol, then it's a designator for its SYMBOL-VALUE. If it's a function, then it must have no arguments and it's a designator for its return value.
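A minimal sketch, assuming RNN is an RNN being trained and reusing MAKE-COST-MONITORS from the tutorial:
;;; Apply cost monitors after every FORWARD step of the RNN so that
;;; per-time-step statistics survive even with *WARP-TIME* enabled.
(setf (step-monitors rnn)
      (make-cost-monitors rnn :attributes '(:event "step")))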
<a id="x-28MGL-BP-3A-40MGL-BP-LUMPS-20MGL-PAX-3ASECTION-29"></a>
11.4 Lumps
<a id="x-28MGL-BP-3A-40MGL-BP-LUMP-20MGL-PAX-3ASECTION-29"></a>
11.4.1 Lump Base Class
<a id="x-28MGL-BP-3ALUMP-20CLASS-29"></a>
-
[class] LUMP CLUMP
A
LUMP
is a simple, layerlike component of a neural network. There are many kinds of lumps, each of which performs a specific operation or just stores inputs and weights. By convention, the names of lumps start with the prefix->
. Defined as classes, they also have a function of the same name as the class to create them easily. These maker functions typically have keyword arguments corresponding to initargs of the class, with some (mainly the input lumps) turned into normal positional arguments. So instead of having to do
(make-instance '->tanh :x some-input :name 'my-tanh)
one can simply write
(->tanh some-input :name 'my-tanh)
Lumps instantiated in any way within a
BUILD-FNN
orBUILD-RNN
are automatically added to the network being built.A lump has its own
NODES
andDERIVATIVES
matrices allocated for it in which the results of the forward and backward passes are stored. This is in contrast to aBPN
whoseNODES
andDERIVATIVES
are those of its last constituentCLUMP
.Since lumps almost always live within a
BPN
, theirN-STRIPES
andMAX-N-STRIPES
are handled automagically behind the scenes.
<a id="x-28MGL-COMMON-3ASIZE-20-28MGL-PAX-3AREADER-20MGL-BP-3ALUMP-29-29"></a>
-
[reader] SIZE LUMP (:SIZE)
The number of values in a single stripe.
<a id="x-28MGL-COMMON-3ADEFAULT-VALUE-20-28MGL-PAX-3AREADER-20MGL-BP-3ALUMP-29-29"></a>
-
[reader] DEFAULT-VALUE LUMP (:DEFAULT-VALUE = 0)
Upon creation or resize the lump's nodes get filled with this value.
<a id="x-28MGL-BP-3ADEFAULT-SIZE-20GENERIC-FUNCTION-29"></a>
-
[generic-function] DEFAULT-SIZE LUMP
Return a default for the
SIZE
ofLUMP
if one is not supplied at instantiation. The value is often computed based on the sizes of the inputs. This function is for implementing new lump types.
<a id="x-28MGL-COMMON-3ANODES-20-28MGL-PAX-3AREADER-20MGL-BP-3ALUMP-29-29"></a>
-
[reader] NODES LUMP (= NIL)
The values computed by the lump in the forward pass are stored here. It is an
N-STRIPES * SIZE
matrix that has storage allocated forMAX-N-STRIPES * SIZE
elements for non-weight lumps.->WEIGHT
lumps have no stripes nor restrictions on their shape.
<a id="x-28MGL-BP-3ADERIVATIVES-20-28MGL-PAX-3AREADER-20MGL-BP-3ALUMP-29-29"></a>
-
[reader] DERIVATIVES LUMP
The derivatives computed in the backward pass are stored here. This matrix is very much like
NODES
in shape and size.
<a id="x-28MGL-BP-3A-40MGL-BP-INPUTS-20MGL-PAX-3ASECTION-29"></a>
11.4.2 Inputs
<a id="x-28MGL-BP-3A-40MGL-BP-INPUT-LUMP-20MGL-PAX-3ASECTION-29"></a>
Input Lump
<a id="x-28MGL-BP-3A--3EINPUT-20CLASS-29"></a>
-
[class] ->INPUT ->DROPOUT
A lump that has no input lumps, does not change its values in the forward pass (except when
DROPOUT
is non-zero), and does not compute derivatives. Clamp inputs onNODES
of input lumps inSET-INPUT
For convenience, ->INPUT can perform dropout itself although it defaults to no dropout.
(->input :size 10 :name 'some-input)
==> #<->INPUT SOME-INPUT :SIZE 10 1/1 :NORM 0.00000>
<a id="x-28MGL-BP-3ADROPOUT-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3EINPUT-29-29"></a>
-
[accessor] DROPOUT ->INPUT (= NIL)
See
DROPOUT
.
<a id="x-28MGL-BP-3A-40MGL-BP-EMBEDDING-LUMP-20MGL-PAX-3ASECTION-29"></a>
Embedding Lump
This lump is like an input and a simple activation molded together in the name of efficiency.
<a id="x-28MGL-BP-3A--3EEMBEDDING-20CLASS-29"></a>
-
[class] ->EMBEDDING LUMP
Select rows of
WEIGHTS
(0
1
), one row for each index inINPUT-ROW-INDICES
. This lump is equivalent to adding an->INPUT
lump with a one hot encoding scheme and a->V*M
lump on top of it, but it is more efficient in execution and in memory usage, because it works with a sparse representation of the input.The
SIZE
(0
1
) of this lump is the number of columns ofWEIGHTS
which is determined automatically.(->embedding :weights (->weight :name 'embedding-weights :dimensions '(3 5)) :name 'embeddings) ==> #<->EMBEDDING EMBEDDINGS :SIZE 5 1/1 :NORM 0.00000>
<a id="x-28MGL-COMMON-3AWEIGHTS-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EEMBEDDING-29-29"></a>
-
[reader] WEIGHTS ->EMBEDDING (:WEIGHTS)
A weight lump whose rows indexed by
INPUT-ROW-INDICES
are copied to the output of this lump.
<a id="x-28MGL-BP-3AINPUT-ROW-INDICES-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EEMBEDDING-29-29"></a>
-
[reader] INPUT-ROW-INDICES ->EMBEDDING (:INPUT-ROW-INDICES)
A sequence of batch size length of row indices. To be set in
SET-INPUT
.
<a id="x-28MGL-BP-3A-40MGL-BP-WEIGHT-LUMP-20MGL-PAX-3ASECTION-29"></a>
11.4.3 Weight Lump
<a id="x-28MGL-BP-3A--3EWEIGHT-20CLASS-29"></a>
-
[class] ->WEIGHT LUMP
A set of optimizable parameters of some kind. When a BPN is trained (see Training) the NODES of weight lumps will be changed. Weight lumps perform no computation.
Weights can be created by specifying the total size or the dimensions:
(dimensions (->weight :size 10 :name 'w))
=> (1 10)
(dimensions (->weight :dimensions '(5 10) :name 'w))
=> (5 10)
<a id="x-28MGL-BP-3ADIMENSIONS-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EWEIGHT-29-29"></a>
-
[reader] DIMENSIONS ->WEIGHT (:DIMENSIONS)
NODES
andDERIVATIVES
of this lump will be allocated with these dimensions.
<a id="x-28MGL-BP-3AWITH-WEIGHTS-COPIED-20MGL-PAX-3AMACRO-29"></a>
-
[macro] WITH-WEIGHTS-COPIED (FROM-BPN) &BODY BODY
In
BODY
->WEIGHT
will first look up if a weight lump of the same name exists inFROM-BPN
and return that, or else create a weight lump normally. IfFROM-BPN
isNIL
, then no weights are copied.
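For example, a sketch that builds a second network sharing the weights of an already built one (reusing MAKE-SUM-SIGN-RNN from the tutorial):
;;; The second call creates no new ->WEIGHT lumps: every weight of the
;;; same name is looked up in TRAINED-RNN and reused.
(let ((trained-rnn (make-sum-sign-rnn)))
  (with-weights-copied (trained-rnn)
    (make-sum-sign-rnn)))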
<a id="x-28MGL-BP-3A-40MGL-BP-ACTIVATIONS-20MGL-PAX-3ASECTION-29"></a>
11.4.4 Activations
<a id="x-28MGL-BP-3A-40MGL-BP-ACTIVATION-SUBNET-20MGL-PAX-3ASECTION-29"></a>
Activation Subnet
So we have some inputs. Usually the next step is to multiply the
input vector with a weight matrix and add biases. This can be done
directly with ->+, ->V*M
and ->WEIGHT
, but it's more convenient to
use activation subnets to reduce the clutter.
<a id="x-28MGL-BP-3A--3EACTIVATION-20CLASS-29"></a>
-
[class] ->ACTIVATION BPN
Activation subnetworks are built by the function ->ACTIVATION and they have a number of lumps hidden inside them. Ultimately, this subnetwork computes a sum like sum_i x_i * W_i + sum_j y_j .* V_j + biases, where x_i are input lumps, W_i are dense matrices representing connections, while V_j are peephole connection vectors that are multiplied in an elementwise manner with their corresponding input y_j.
<a id="x-28MGL-BP-3A--3EACTIVATION-20FUNCTION-29"></a>
-
[function] ->ACTIVATION INPUTS &KEY (NAME (GENSYM)) SIZE PEEPHOLES (ADD-BIAS-P T)
Create a subnetwork of class ->ACTIVATION that computes the activation from dense connections from lumps in INPUTS, and elementwise connections from lumps in PEEPHOLES. Create new ->WEIGHT lumps as necessary. INPUTS and PEEPHOLES can be a single lump or a list of lumps. Finally, if ADD-BIAS-P, then add an elementwise bias too. SIZE must be specified explicitly, because it is not possible to determine it unless there are peephole connections.
(->activation (->input :size 10 :name 'input) :name 'h1 :size 4)
==> #<->ACTIVATION (H1 :ACTIVATION) :STRIPES 1/1 :CLUMPS 4>
This is the basic workhorse of neural networks which takes care of the linear transformation whose result is then fed to some non-linearity (->SIGMOID, ->TANH, etc).
The name of the subnetwork clump is (,NAME :ACTIVATION). The bias weight lump (if any) is named (:BIAS ,NAME). Dense connection weight lumps are named after the input and NAME: (,(NAME INPUT) ,NAME), while peephole weight lumps are named (,(NAME INPUT) ,NAME :PEEPHOLE). This is useful to know if, for example, they are to be initialized differently.
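A hedged sketch that relies on this naming convention to initialize one particular dense connection differently (it assumes an activation built with :NAME 'H1 from a lump named INPUT; GAUSSIAN-RANDOM! is from MGL-MAT):
;;; Give the INPUT -> H1 connection small gaussian weights and leave all
;;; other weights, including (:BIAS H1), untouched.
(map-segments (lambda (weights)
                (when (equal (name weights) '(input h1))
                  (gaussian-random! (nodes weights) :stddev 0.01)))
              fnn)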
<a id="x-28MGL-BP-3A-40MGL-BP-BATCH-NORMALIZATION-20MGL-PAX-3ASECTION-29"></a>
Batch-Normalization
<a id="x-28MGL-BP-3A--3EBATCH-NORMALIZED-20CLASS-29"></a>
-
[class] ->BATCH-NORMALIZED LUMP
This is an implementation of v3 of the Batch Normalization paper. The output of
->BATCH-NORMALIZED
is its input normalized so that for all elements the mean across stripes is zero and the variance is 1. That is, the mean of the batch is subtracted from the inputs and they are rescaled by their sample stddev. Actually, after the normalization step the values are rescaled and shifted (but this time with learnt parameters) in order to keep the representational power of the model the same. The primary purpose of this lump is to speed up learning, but it also acts as a regularizer. See the paper for the details.
To normalize the output of LUMP with no additional regularizer effect:
(->batch-normalized lump :batch-size :use-population)
The above uses an exponential moving average to estimate the mean and variance of batches and these estimations are used at both training and test time. In contrast to this, the published version uses the sample mean and variance of the current batch at training time which injects noise into the process. The noise is higher for lower batch sizes and has a regularizing effect. This is the default behavior (equivalent to
:BATCH-SIZE NIL
):(->batch-normalized lump)
For performance reasons one may wish to process a higher number of instances in a batch (in the sense of
N-STRIPES
) and get the regularization effect associated with a lower batch size. This is possible by setting :BATCH-SIZE
to a divisor of the number of stripes. Say, the number of stripes is 128, but we want as much regularization as we would get with 32:
(->batch-normalized lump :batch-size 32)
The primary input of
->BATCH-NORMALIZED
is often an->ACTIVATION
(0
1
) and its output is fed into an activation function (see Activation Functions).
<a id="x-28MGL-BP-3ABATCH-NORMALIZATION-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EBATCH-NORMALIZED-29-29"></a>
-
[reader] BATCH-NORMALIZATION ->BATCH-NORMALIZED (:NORMALIZATION)
The
->BATCH-NORMALIZATION
of this lump. May be shared between multiple->BATCH-NORMALIZED
lumps.Batch normalization is special in that it has state apart from the computed results (
NODES
) and its derivatives (DERIVATIVES
). This state is the estimated mean and variance of its inputs and they are encapsulated by->BATCH-NORMALIZATION
.If
NORMALIZATION
is not given at instantiation, then a new->BATCH-NORMALIZATION
object will be created automatically, passing:BATCH-SIZE
,:VARIANCE-ADJUSTMENT
, and:POPULATION-DECAY
arguments on to->BATCH-NORMALIZATION
. SeeBATCH-SIZE
,VARIANCE-ADJUSTMENT
andPOPULATION-DECAY
. New scale and shift weight lumps will be created with names:`(,name :scale) `(,name :shift)
where
NAME
is theNAME
(0
1
) of this lump.This default behavior covers the use-case where the statistics kept by
->BATCH-NORMALIZATION
are to be shared only between time steps of anRNN
.
<a id="x-28MGL-BP-3A--3EBATCH-NORMALIZATION-20CLASS-29"></a>
-
[class] ->BATCH-NORMALIZATION ->WEIGHT
The primary purpose of this class is to hold the estimated mean and variance of the inputs to be normalized and allow them to be shared between multiple
->BATCH-NORMALIZED
lumps that carry out the computation. These estimations are saved and loaded bySAVE-STATE
andLOAD-STATE
.
(->batch-normalization (->weight :name '(h1 :scale) :size 10)
                       (->weight :name '(h1 :shift) :size 10)
                       :name '(h1 :batch-normalization))
<a id="x-28MGL-COMMON-3ASCALE-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EBATCH-NORMALIZATION-29-29"></a>
-
[reader] SCALE ->BATCH-NORMALIZATION (:SCALE)
A weight lump of the same size as
SHIFT
. This is $\gamma$ in the paper.
<a id="x-28MGL-BP-3ASHIFT-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EBATCH-NORMALIZATION-29-29"></a>
-
[reader] SHIFT ->BATCH-NORMALIZATION (:SHIFT)
A weight lump of the same size as
SCALE
. This is $\beta$ in the paper.
<a id="x-28MGL-COMMON-3ABATCH-SIZE-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EBATCH-NORMALIZATION-29-29"></a>
-
[reader] BATCH-SIZE ->BATCH-NORMALIZATION (:BATCH-SIZE = NIL)
Normally all stripes participate in the batch. Lowering the number of stripes may increase the regularization effect, but it also makes the computation less efficient. By setting
BATCH-SIZE
to a divisor ofN-STRIPES
one can decouple the concern of efficiency from that of regularization. The default value,NIL
, is equivalent toN-STRIPES
.BATCH-SIZE
only affects training.With the special value
:USE-POPULATION
, instead of the mean and the variance of the current batch, use the population statistics for normalization. This effectively cancels the regularization effect, leaving only the faster learning.
<a id="x-28MGL-GD-3AVARIANCE-ADJUSTMENT-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EBATCH-NORMALIZATION-29-29"></a>
-
[reader] VARIANCE-ADJUSTMENT ->BATCH-NORMALIZATION (:VARIANCE-ADJUSTMENT = 1.0e-4)
A small positive real number that's added to the sample variance. This is $\epsilon$ in the paper.
<a id="x-28MGL-BP-3APOPULATION-DECAY-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EBATCH-NORMALIZATION-29-29"></a>
-
[reader] POPULATION-DECAY ->BATCH-NORMALIZATION (:POPULATION-DECAY = 0.99)
While training, an exponential moving average of batch means and standard deviances (termed population statistics) is updated. When making predictions, normalization is performed using these statistics. These population statistics are persisted by
SAVE-STATE
.
<a id="x-28MGL-BP-3A--3EBATCH-NORMALIZED-ACTIVATION-20FUNCTION-29"></a>
-
[function] ->BATCH-NORMALIZED-ACTIVATION INPUTS &KEY (NAME (GENSYM)) SIZE PEEPHOLES BATCH-SIZE VARIANCE-ADJUSTMENT POPULATION-DECAY
A utility function that creates and wraps an ->ACTIVATION in ->BATCH-NORMALIZED, and with its BATCH-NORMALIZATION the two weight lumps for the scale and shift parameters.
(->BATCH-NORMALIZED-ACTIVATION INPUTS :NAME 'H1 :SIZE 10)
is equivalent to:
(->batch-normalized (->activation inputs :name 'h1 :size 10 :add-bias-p nil)
                    :name '(h1 :batch-normalized-activation))
Note how biases are turned off since normalization will cancel them anyway (but a shift is added which amounts to the same effect).
<a id="x-28MGL-BP-3A-40MGL-BP-ACTIVATION-FUNCTIONS-20MGL-PAX-3ASECTION-29"></a>
11.4.5 Activation Functions
Now we are moving on to the most important non-linearities to which activations are fed.
<a id="x-28MGL-BP-3A-40MGL-BP-SIGMOID-LUMP-20MGL-PAX-3ASECTION-29"></a>
Sigmoid Lump
<a id="x-28MGL-BP-3A--3ESIGMOID-20CLASS-29"></a>
-
[class] ->SIGMOID ->DROPOUT LUMP
Applies the
1/(1 + e^{-x})
function elementwise to its inputs. This is one of the classic non-linearities for neural networks.For convenience,
->SIGMOID
can perform dropout itself although it defaults to no dropout.(->sigmoid (->activation (->input :size 10) :size 5) :name 'this) ==> #<->SIGMOID THIS :SIZE 5 1/1 :NORM 0.00000>
The
SIZE
(0
1
) of this lump is the size of its input which is determined automatically.
<a id="x-28MGL-BP-3ADROPOUT-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3ESIGMOID-29-29"></a>
-
[accessor] DROPOUT ->SIGMOID (= NIL)
See
DROPOUT
.
<a id="x-28MGL-BP-3A-40MGL-BP-TANH-LUMP-20MGL-PAX-3ASECTION-29"></a>
Tanh Lump
<a id="x-28MGL-BP-3A--3ETANH-20CLASS-29"></a>
-
[class] ->TANH LUMP
Applies the
TANH
function to its input in an elementwise manner. TheSIZE
(0
1
) of this lump is the size of its input which is determined automatically.
<a id="x-28MGL-BP-3A-40MGL-BP-SCALED-TANH-LUMP-20MGL-PAX-3ASECTION-29"></a>
Scaled Tanh Lump
<a id="x-28MGL-BP-3A--3ESCALED-TANH-20CLASS-29"></a>
-
[class] ->SCALED-TANH LUMP
Pretty much like
TANH
but its input and output is scaled in such a way that the variance of its output is close to 1 if the variance of its input is close to 1 which is a nice property to combat vanishing gradients. The actual function is1.7159 * tanh(2/3 * x)
. TheSIZE
(0
1
) of this lump is the size of its input which is determined automatically.
<a id="x-28MGL-BP-3A-40MGL-BP-RELU-LUMP-20MGL-PAX-3ASECTION-29"></a>
Relu Lump
We are somewhere around year 2007 by now.
<a id="x-28MGL-BP-3A--3ERELU-20CLASS-29"></a>
-
[class] ->RELU LUMP
max(0,x)
activation function. Be careful, relu units can get stuck in the off state: if they move too far into negative territory it can be very difficult to get out of it. The SIZE of this lump is the size of its input, which is determined automatically.
<a id="x-28MGL-BP-3A-40MGL-BP-MAX-LUMP-20MGL-PAX-3ASECTION-29"></a>
Max Lump
We are in about year 2011.
<a id="x-28MGL-BP-3A--3EMAX-20CLASS-29"></a>
-
[class] ->MAX LUMP
This is basically maxout without dropout (see http://arxiv.org/abs/1302.4389). It groups its inputs by
GROUP-SIZE
, and outputs the maximum of each group. TheSIZE
(0
1
) of the output is automatically calculated, it is the size of the input divided byGROUP-SIZE
.
(->max (->input :size 120) :group-size 3 :name 'my-max)
==> #<->MAX MY-MAX :SIZE 40 1/1 :NORM 0.00000 :GROUP-SIZE 3>
The advantage of ->MAX over ->RELU is that gradient flow is never stopped, so there is no problem of units getting stuck in the off state.
<a id="x-28MGL-COMMON-3AGROUP-SIZE-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EMAX-29-29"></a>
-
[reader] GROUP-SIZE ->MAX (:GROUP-SIZE)
The number of inputs in each group.
<a id="x-28MGL-BP-3A-40MGL-BP-MIN-LUMP-20MGL-PAX-3ASECTION-29"></a>
Min Lump
<a id="x-28MGL-BP-3A--3EMIN-20CLASS-29"></a>
<a id="x-28MGL-COMMON-3AGROUP-SIZE-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EMIN-29-29"></a>
-
[reader] GROUP-SIZE ->MIN (:GROUP-SIZE)
The number of inputs in each group.
<a id="x-28MGL-BP-3A-40MGL-BP-MAX-CHANNEL-LUMP-20MGL-PAX-3ASECTION-29"></a>
Max-Channel Lump
<a id="x-28MGL-BP-3A--3EMAX-CHANNEL-20CLASS-29"></a>
-
[class] ->MAX-CHANNEL LUMP
Called LWTA (Local Winner Take All) or Channel-Out (see http://arxiv.org/abs/1312.1909) in the literature, it is basically ->MAX, but instead of producing one output per group, it just produces zeros for all units but the one with the maximum value in the group. This allows the next layer to get some information about the path along which information flowed. The SIZE of this lump is the size of its input, which is determined automatically.
<a id="x-28MGL-COMMON-3AGROUP-SIZE-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EMAX-CHANNEL-29-29"></a>
-
[reader] GROUP-SIZE ->MAX-CHANNEL (:GROUP-SIZE)
The number of inputs in each group.
<a id="x-28MGL-BP-3A-40MGL-BP-LOSSES-20MGL-PAX-3ASECTION-29"></a>
11.4.6 Losses
Ultimately, we need to tell the network what to learn which means that the loss function to be minimized needs to be constructed as part of the network.
<a id="x-28MGL-BP-3A-40MGL-BP-LOSS-LUMP-20MGL-PAX-3ASECTION-29"></a>
Loss Lump
<a id="x-28MGL-BP-3A--3ELOSS-20CLASS-29"></a>
-
[class] ->LOSS ->SUM
Calculate the loss for the instances in the batch. The main purpose of this lump is to provide a training signal.
An error lump is usually a leaf in the graph of lumps (i.e. there are no other lumps whose input is this one). The special thing about error lumps is that 1 (but see
IMPORTANCE
) is added automatically to their derivatives. Error lumps have exactly one node (per stripe) whose value is computed as the sum of nodes in their input lump.
<a id="x-28MGL-BP-3AIMPORTANCE-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3ELOSS-29-29"></a>
-
[accessor] IMPORTANCE ->LOSS (:IMPORTANCE = NIL)
This is to support weighted instances, that is, when not all training instances are equally important. If non-NIL, a 1d MAT with the importances of stripes of the batch. When IMPORTANCE is given (typically in SET-INPUT), then instead of adding 1 to the derivatives of all stripes, IMPORTANCE is added elementwise.
<a id="x-28MGL-BP-3A-40MGL-BP-SQUARED-DIFFERENCE-LUMP-20MGL-PAX-3ASECTION-29"></a>
Squared Difference Lump
In regression, the squared error loss is most common. The squared
error loss can be constructed by combining ->SQUARED-DIFFERENCE
with
a ->LOSS
.
<a id="x-28MGL-BP-3A--3ESQUARED-DIFFERENCE-20CLASS-29"></a>
-
[class] ->SQUARED-DIFFERENCE LUMP
This lump takes two input lumps and calculates their squared difference
(x - y)^2
in an elementwise manner. TheSIZE
(0
1
) of this lump is automatically determined from the size of its inputs. This lump is often fed into ->LOSS
that sums the squared differences and makes it part of the function to be minimized.
(->loss (->squared-difference (->activation (->input :size 100) :size 10)
                              (->input :name 'target :size 10))
        :name 'squared-error)
==> #<->LOSS SQUARED-ERROR :SIZE 1 1/1 :NORM 0.00000>
Currently this lump is not CUDAized, but it will copy data from the GPU if it needs to.
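A hedged sketch of the SET-INPUT side of such a regression network (MY-REGRESSION-FNN, INSTANCE-FEATURES and INSTANCE-TARGETS are hypothetical, and the lumps are assumed to be named INPUT and TARGET):
;;; Copy each instance's features and regression targets into the INPUT
;;; and TARGET lumps; the ->LOSS lump above then sums the squared
;;; differences per stripe.
(defmethod set-input (instances (fnn my-regression-fnn))
  (let ((input-nodes (nodes (find-clump 'input fnn)))
        (target-nodes (nodes (find-clump 'target fnn))))
    (loop for stripe upfrom 0
          for instance in instances
          do (loop for i upfrom 0
                   for x in (instance-features instance)
                   do (setf (mref input-nodes stripe i) x))
             (loop for i upfrom 0
                   for y in (instance-targets instance)
                   do (setf (mref target-nodes stripe i) y)))))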
<a id="x-28MGL-BP-3A-40MGL-BP-SOFTMAX-XE-LOSS-LUMP-20MGL-PAX-3ASECTION-29"></a>
Softmax Cross-Entropy Loss Lump
<a id="x-28MGL-BP-3A--3ESOFTMAX-XE-LOSS-20CLASS-29"></a>
-
[class] ->SOFTMAX-XE-LOSS LUMP
A specialized lump that computes the softmax of its input in the forward pass and backpropagates a cross-entropy loss. The advantage of doing these together is numerical stability. The total cross-entropy is the sum of cross-entropies per group of
GROUP-SIZE
elements:$$ XE(x) = - \sum_{i=1,g} t_i \ln(s_i), $$
where
g
is the number of classes (GROUP-SIZE
),t_i
are the targets (i.e. the true probabilities of the class, often all zero but one),s_i
is the output of softmax calculated from inputX
$$ s_i = {softmax}(x_1, x_2, ..., x_g) = \frac{e^{x_i}}{\sum_{j=1,g} e^{x_j}} $$
In other words, in the forward phase this lump takes input
X
, computes its elementwiseEXP
, normalizes each group ofGROUP-SIZE
elements to sum to 1 to get the softmax which is the result that goes intoNODES
. In the backward phase, there are two sources of gradients: the lumps that use the output of this lump as their input (currently not implemented and would result in an error) and an implicit cross-entropy loss.One can get the cross-entropy calculated in the most recent forward pass by calling
COST
on this lump.This is the most common loss function for classification. In fact, it is nearly ubiquitous. See the
FNN
Tutorial and theRNN
Tutorial for how this loss andSET-INPUT
work together.
<a id="x-28MGL-COMMON-3AGROUP-SIZE-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3ESOFTMAX-XE-LOSS-29-29"></a>
-
[reader] GROUP-SIZE ->SOFTMAX-XE-LOSS (:GROUP-SIZE)
The number of elements in a softmax group. This is the number of classes for classification. Often
GROUP-SIZE
is equal toSIZE
(0
1
) (it is the default), but in general the only constraint is thatSIZE
is a multiple ofGROUP-SIZE
.
<a id="x-28MGL-COMMON-3ATARGET-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3ESOFTMAX-XE-LOSS-29-29"></a>
-
[accessor] TARGET ->SOFTMAX-XE-LOSS (:TARGET = NIL)
Set in
SET-INPUT
, this is either aMAT
of the same size as the input lumpX
or if the target is very sparse, this can also be a sequence of batch size length that contains the index value pairs of non-zero entries:
(;; first instance in batch has two non-zero targets
 (;; class 10 has 30% expected probability
  (10 . 0.3)
  ;; class 2 has 70% expected probability
  (2 . 0.7))
 ;; second instance in batch puts 100% on class 7
 7
 ;; more instances in the batch follow
 ...)
Actually, in the rare case where GROUP-SIZE is not SIZE (i.e. there are several softmax normalization groups for every example), the length of the above target sequence is BATCH-SIZE * N-GROUPS. Indices are always relative to the start of the group.
If GROUP-SIZE is large (for example, in neural language models with a huge number of words), using sparse targets can make things go much faster, because calculation of the derivative is no longer quadratic.
Giving different weights to training instances is implicitly supported. While target values in a group should sum to 1, multiplying all target values with a weight W is equivalent to training W times on the same example.
<a id="x-28MGL-BP-3AENSURE-SOFTMAX-TARGET-MATRIX-20FUNCTION-29"></a>
-
[function] ENSURE-SOFTMAX-TARGET-MATRIX SOFTMAX-XE-LOSS N
Set
TARGET
ofSOFTMAX-XE-LOSS
to aMAT
capable of holding the dense target values forN
stripes.
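A hedged sketch of using it for dense targets in SET-INPUT (MY-FNN and INSTANCE-CLASS are hypothetical; FILL! and MREF are from MGL-MAT):
;;; Make sure the dense target matrix exists for this batch, clear it,
;;; then put probability 1 on each instance's true class.
(defmethod set-input (instances (fnn my-fnn))
  (let ((loss (find-clump 'prediction fnn)))
    (ensure-softmax-target-matrix loss (length instances))
    (let ((target (target loss)))
      (fill! 0 target)
      (loop for stripe upfrom 0
            for instance in instances
            do (setf (mref target stripe (instance-class instance)) 1)))))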
<a id="x-28MGL-BP-3A-40MGL-BP-STOCHASTICITY-20MGL-PAX-3ASECTION-29"></a>
11.4.7 Stochasticity
<a id="x-28MGL-BP-3A-40MGL-BP-DROPOUT-LUMP-20MGL-PAX-3ASECTION-29"></a>
Dropout Lump
<a id="x-28MGL-BP-3A--3EDROPOUT-20CLASS-29"></a>
-
[class] ->DROPOUT LUMP
The output of this lump is identical to its input, except it randomly zeroes out some of them during training, which acts as a very strong regularizer. See Geoffrey Hinton's 'Improving neural networks by preventing co-adaptation of feature detectors'.
The
SIZE
(0
1
) of this lump is the size of its input which is determined automatically.
<a id="x-28MGL-BP-3ADROPOUT-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3EDROPOUT-29-29"></a>
-
[accessor] DROPOUT ->DROPOUT (:DROPOUT = 0.5)
If non-NIL, then in the forward pass zero out each node in this chunk with
DROPOUT
probability.
<a id="x-28MGL-BP-3A-40MGL-BP-GAUSSIAN-RANDOM-LUMP-20MGL-PAX-3ASECTION-29"></a>
Gaussian Random Lump
<a id="x-28MGL-BP-3A--3EGAUSSIAN-RANDOM-20CLASS-29"></a>
-
[class] ->GAUSSIAN-RANDOM LUMP
This lump has no input, it produces normally distributed independent random numbers with MEAN and VARIANCE (or VARIANCE-FOR-PREDICTION). This is a useful building block for noise based regularization methods.
(->gaussian-random :size 10 :name 'normal :mean 1 :variance 2)
==> #<->GAUSSIAN-RANDOM NORMAL :SIZE 10 1/1 :NORM 0.00000>
<a id="x-28MGL-BP-3AMEAN-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3EGAUSSIAN-RANDOM-29-29"></a>
-
[accessor] MEAN ->GAUSSIAN-RANDOM (:MEAN = 0)
The mean of the normal distribution.
<a id="x-28MGL-BP-3AVARIANCE-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3EGAUSSIAN-RANDOM-29-29"></a>
-
[accessor] VARIANCE ->GAUSSIAN-RANDOM (:VARIANCE = 1)
The variance of the normal distribution.
<a id="x-28MGL-BP-3AVARIANCE-FOR-PREDICTION-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3EGAUSSIAN-RANDOM-29-29"></a>
-
[accessor] VARIANCE-FOR-PREDICTION ->GAUSSIAN-RANDOM (:VARIANCE-FOR-PREDICTION = 0)
If not
NIL
, then this value overridesVARIANCE
when not in training (i.e. when making predictions).
<a id="x-28MGL-BP-3A-40MGL-BP-SAMPLE-BINARY-LUMP-20MGL-PAX-3ASECTION-29"></a>
Binary Sampling Lump
<a id="x-28MGL-BP-3A--3ESAMPLE-BINARY-20CLASS-29"></a>
-
[class] ->SAMPLE-BINARY LUMP
Treating values of its input as probabilities, sample independent binomials. Turn true into 1 and false into 0. The
SIZE
(0
1
) of this lump is determined automatically from the size of its input.
(->sample-binary (->input :size 10) :name 'binarized-input)
==> #<->SAMPLE-BINARY BINARIZED-INPUT :SIZE 10 1/1 :NORM 0.00000>
<a id="x-28MGL-BP-3A-40MGL-BP-ARITHMETIC-20MGL-PAX-3ASECTION-29"></a>
11.4.8 Arithmetic
<a id="x-28MGL-BP-3A-40MGL-BP-SUM-LUMP-20MGL-PAX-3ASECTION-29"></a>
Sum Lump
<a id="x-28MGL-BP-3A--3ESUM-20CLASS-29"></a>
-
[class] ->SUM LUMP
Computes the sum of all nodes of its input per stripe. The SIZE of this lump is always 1.
<a id="x-28MGL-BP-3A-40MGL-BP-V-2AM-LUMP-20MGL-PAX-3ASECTION-29"></a>
Vector-Matrix Multiplication Lump
<a id="x-28MGL-BP-3A--3EV-2AM-20CLASS-29"></a>
-
[class] ->V*M LUMP
Perform X * WEIGHTS where X (the input) is of size M and WEIGHTS is a ->WEIGHT whose single stripe is taken to be of dimensions M x N stored in row major order. N is the size of this lump. If TRANSPOSE-WEIGHTS-P, then WEIGHTS is N x M and X * WEIGHTS' is computed.
<a id="x-28MGL-COMMON-3AWEIGHTS-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EV-2AM-29-29"></a>
-
[reader] WEIGHTS ->V*M (:WEIGHTS)
A
->WEIGHT
lump.
<a id="x-28MGL-BP-3ATRANSPOSE-WEIGHTS-P-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EV-2AM-29-29"></a>
-
[reader] TRANSPOSE-WEIGHTS-P ->V*M (:TRANSPOSE-WEIGHTS-P = NIL)
Determines whether the input is multiplied by
WEIGHTS
(0
1
) or its transpose.
<a id="x-28MGL-BP-3A-40MGL-BP--2B-LUMP-20MGL-PAX-3ASECTION-29"></a>
Elementwise Addition Lump
<a id="x-28MGL-BP-3A--3E-2B-20CLASS-29"></a>
-
[class] ->+ LUMP
Performs elementwise addition on its input lumps. The
SIZE
(0
1
) of this lump is automatically determined from the size of its inputs if there is at least one. If one of the inputs is a->WEIGHT
lump, then it is added to every stripe.
(->+ (list (->input :size 10) (->weight :size 10 :name 'bias)) :name 'plus)
==> #<->+ PLUS :SIZE 10 1/1 :NORM 0.00000>
<a id="x-28MGL-BP-3A-40MGL-BP--2A-LUMP-20MGL-PAX-3ASECTION-29"></a>
Elementwise Multiplication Lump
<a id="x-28MGL-BP-3A--3E-2A-20CLASS-29"></a>
-
[class] ->* LUMP
Performs elementwise multiplication on its two input lumps. The
SIZE
(0
1
) of this lump is automatically determined from the size of its inputs. Either input can be a->WEIGHT
lump.
(->* (->input :size 10) (->weight :size 10 :name 'scale) :name 'mult)
==> #<->* MULT :SIZE 10 1/1 :NORM 0.00000>
<a id="x-28MGL-BP-3A-40MGL-BP-ABS-LUMP-20MGL-PAX-3ASECTION-29"></a>
Abs Lump
<a id="x-28MGL-BP-3A--3EABS-20CLASS-29"></a>
- [class] ->ABS LUMP
<a id="x-28MGL-BP-3A-40MGL-BP-EXP-LUMP-20MGL-PAX-3ASECTION-29"></a>
Exp Lump
<a id="x-28MGL-BP-3A--3EEXP-20CLASS-29"></a>
- [class] ->EXP LUMP
<a id="x-28MGL-BP-3A-40MGL-BP-NORMALIZED-LUMP-20MGL-PAX-3ASECTION-29"></a>
Normalized Lump
<a id="x-28MGL-BP-3A--3ENORMALIZED-20CLASS-29"></a>
- [class] ->NORMALIZED LUMP
<a id="x-28MGL-BP-3A-40MGL-BP-RNN-OPERATIONS-20MGL-PAX-3ASECTION-29"></a>
11.4.9 Operations for RNNs
<a id="x-28MGL-BP-3A-40MGL-BP-LSTM-SUBNET-20MGL-PAX-3ASECTION-29"></a>
LSTM Subnet
<a id="x-28MGL-BP-3A--3ELSTM-20CLASS-29"></a>
-
[class] ->LSTM BPN
Long-Short Term Memory subnetworks are built by the function
->LSTM
and they have many lumps hidden inside them. These lumps are packaged into a subnetwork to reduce clutter.
<a id="x-28MGL-BP-3A--3ELSTM-20FUNCTION-29"></a>
-
[function] ->LSTM INPUTS &KEY NAME CELL-INIT OUTPUT-INIT SIZE (ACTIVATION-FN '->ACTIVATION) (GATE-FN '->SIGMOID) (INPUT-FN '->TANH) (OUTPUT-FN '->TANH) (PEEPHOLES T)
Create an LSTM layer consisting of input, forget and output gates with which input, cell state and output are scaled. Lots of lumps are created, and the final one representing the output of the LSTM has NAME. The rest of the lumps are named automatically based on NAME. This function returns only the output lump (m), but all created lumps are added automatically to the BPN being built.
There are many papers and tutorials on LSTMs. This version is well described in "Long Short-Term Memory Recurrent Neural Network Architectures for Large Scale Acoustic Modeling" (2014, Hasim Sak, Andrew Senior, Francoise Beaufays). Using the notation from that paper:
$$ i_t = s(W_{ix} x_t + W_{im} m_{t-1} + W_{ic} \odot c_{t-1} + b_i) $$
$$ f_t = s(W_{fx} x_t + W_{fm} m_{t-1} + W_{fc} \odot c_{t-1} + b_f) $$
$$ c_t = f_t \odot c_{t-1} + i_t \odot g(W_{cx} x_t + W_{cm} m_{t-1} + b_c) $$
$$ o_t = s(W_{ox} x_t + W_{om} m_{t-1} + W_{oc} \odot c_t + b_o) $$
$$ m_t = o_t \odot h(c_t), $$
where
i
,f
, ando
are the input, forget and output gates.c
is the cell state andm
is the actual output.Weight matrices for connections from
c
(W_ic
,W_fc
andW_oc
) are diagonal and represented by just the vector of diagonal values. These connections are only added ifPEEPHOLES
is true.A notable difference from the paper is that in addition to being a single lump,
x_t
(INPUTS
) can also be a list of lumps. Whenever some activation is to be calculated based onx_t
, it is going to be the sum of individual activations. For example,W_ix * x_t
is reallysum_j W_ijx * inputs_j
.If
CELL-INIT
is non-NIL, then it must be aCLUMP
ofSIZE
form which stands for the initial state of the value cell (c_{-1}
).CELL-INIT
beingNIL
is equivalent to the state of all zeros.ACTIVATION-FN
defaults to->ACTIVATION
(0
1
), but it can be for example->BATCH-NORMALIZED-ACTIVATION
. In general, functions like the aforementioned two with signature like (INPUTS
&KEY
NAME
SIZE
PEEPHOLES
) can be passed asACTIVATION-FN
.
<a id="x-28MGL-BP-3A-40MGL-BP-SEQ-BARRIER-LUMP-20MGL-PAX-3ASECTION-29"></a>
Sequence Barrier Lump
<a id="x-28MGL-BP-3A--3ESEQ-BARRIER-20CLASS-29"></a>
-
[class] ->SEQ-BARRIER LUMP
In an RNN, processing of stripes (instances in the batch) may require a different number of time steps, so the final state for stripe 0 is in stripe 0 of some lump L at time step 7, while for stripe 1 it is in stripe 1 of some lump L at time step 42.
This lump copies the per-stripe states from different lumps into a single lump so that further processing can take place (typically when the RNN is embedded in another network).
The SIZE of this lump is automatically set to the size of the lump returned by (FUNCALL SEQ-ELT-FN 0).
<a id="x-28MGL-BP-3ASEQ-ELT-FN-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3ESEQ-BARRIER-29-29"></a>
-
[reader] SEQ-ELT-FN ->SEQ-BARRIER (:SEQ-ELT-FN)
A function of an
INDEX
argument that returns the lump with that index in some sequence.
<a id="x-28MGL-BP-3ASEQ-INDICES-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3ESEQ-BARRIER-29-29"></a>
-
[accessor] SEQ-INDICES ->SEQ-BARRIER
A sequence of length batch size of indices. The element at index I is the index to be passed to SEQ-ELT-FN to find the lump whose stripe I is copied to stripe I of this lump.
<a id="x-28MGL-BP-3A-40MGL-BP-UTILITIES-20MGL-PAX-3ASECTION-29"></a>
11.5 Utilities
<a id="x-28MGL-BP-3ARENORMALIZE-ACTIVATIONS-20FUNCTION-29"></a>
-
[function] RENORMALIZE-ACTIVATIONS ->V*M-LUMPS L2-UPPER-BOUND
If the l2 norm of the incoming weight vector of a unit is larger than L2-UPPER-BOUND, then renormalize it to L2-UPPER-BOUND. The list of ->V*M-LUMPS is assumed to be eventually fed to the same lump.
To use it, group the activation clumps into the same GD-OPTIMIZER and hang this function on AFTER-UPDATE-HOOK; the latter is done for you by ARRANGE-FOR-RENORMALIZING-ACTIVATIONS.
See "Improving neural networks by preventing co-adaptation of feature detectors (Hinton, 2012)", http://arxiv.org/pdf/1207.0580.pdf.
<a id="x-28MGL-BP-3AARRANGE-FOR-RENORMALIZING-ACTIVATIONS-20FUNCTION-29"></a>
-
[function] ARRANGE-FOR-RENORMALIZING-ACTIVATIONS BPN OPTIMIZER L2-UPPER-BOUND
By pushing a lambda onto AFTER-UPDATE-HOOK of OPTIMIZER, arrange for all weights being trained by OPTIMIZER to be renormalized (as in RENORMALIZE-ACTIVATIONS with L2-UPPER-BOUND).
It is assumed that the weights either belong to an activation lump or are simply added to the activations (i.e. they are biases).
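A minimal sketch, assuming OPTIMIZER is the GD optimizer training FNN:
;;; After every weight update, incoming weight vectors (and biases) with
;;; an l2 norm above 2 are scaled back down to 2.
(arrange-for-renormalizing-activations fnn optimizer 2)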
<a id="x-28MGL-3A-40MGL-BM-20MGL-PAX-3ASECTION-29"></a>
12 Boltzmann Machines
<a id="x-28MGL-3A-40MGL-GP-20MGL-PAX-3ASECTION-29"></a>
13 Gaussian Processes
<a id="x-28MGL-NLP-3A-40MGL-NLP-20MGL-PAX-3ASECTION-29"></a>
14 Natural Language Processing
[in package MGL-NLP]
This is nothing more than a couple of utilities for now, which may grow into a more serious toolset for NLP eventually.
<a id="x-28MGL-NLP-3AMAKE-N-GRAM-MAPPEE-20FUNCTION-29"></a>
-
[function] MAKE-N-GRAM-MAPPEE FUNCTION N
Make a function of a single argument that's suitable as the function argument to a mapper function. It calls FUNCTION with every N consecutive elements.
(map nil (make-n-gram-mappee #'print 3) '(a b c d e))
..
.. (A B C)
.. (B C D)
.. (C D E)
<a id="x-28MGL-NLP-3ABLEU-20FUNCTION-29"></a>
-
[function] BLEU CANDIDATES REFERENCES &KEY CANDIDATE-KEY REFERENCE-KEY (N 4)
Compute the BLEU score for a bilingual corpus of CANDIDATES and REFERENCES.
BLEU
measures how good a translation is compared to human reference translations.CANDIDATES
(keyed byCANDIDATE-KEY
) andREFERENCES
(keyed byREFERENCE-KEY
) are sequences of sentences. A sentence is a sequence of words. Words are compared withEQUAL
, and may be any kind of object (not necessarily strings).Currently there is no support for multiple reference translations.
N
determines the largest n-grams to consider.The first return value is the
BLEU
score (between 0 and 1, not as a percentage). The second value is the sum of the lengths ofCANDIDATES
divided by the sum of the lengths ofREFERENCES
(orNIL
, if the denominator is 0). The third is a list of n-gram precisions (also between 0 and 1 orNIL
), one for each element in [1..N
].This is basically a reimplementation of multi-bleu.perl.
(bleu '((1 2 3 4) (a b))
      '((1 2 3 4) (1 2)))
=> 0.8408964
=> 1
=> (;; 1-gram precision: 4/6
    2/3
    ;; 2-gram precision: 3/4
    3/4
    ;; 3-gram precision: 2/2
    1
    ;; 4-gram precision: 1/1
    1)
<a id="x-28MGL-NLP-3A-40MGL-NLP-BAG-OF-WORDS-20MGL-PAX-3ASECTION-29"></a>
14.1 Bag of Words
<a id="x-28MGL-NLP-3ABAG-OF-WORDS-ENCODER-20CLASS-29"></a>
-
[class] BAG-OF-WORDS-ENCODER
ENCODE
all features of a document with a sparse vector. Get the features of document fromMAPPER
, encode each feature withFEATURE-ENCODER
.FEATURE-ENCODER
may returnNIL
if the feature is not used. The result is a vector of encoded-feature/value conses. encoded-features are unique (underENCODED-FEATURE-TEST
) within the vector but are in no particular order.Depending on
KIND
, value is calculated in various ways:-
For
:FREQUENCY
it is the number of times the corresponding feature was found inDOCUMENT
. -
For
:BINARY
it is always 1. -
For
:NORMALIZED-FREQUENCY
and:NORMALIZED-BINARY
are like the unnormalized counterparts except that as the final step values in the assembled sparse vector are normalized to sum to 1. -
Finally,
:COMPACTED-BINARY
is like:BINARY
but the return value is not a vector of conses, but a vector of element-type ENCODED-FEATURE-TYPE
.
(let* ((feature-indexer
         (make-indexer
          (alexandria:alist-hash-table '(("I" . 3) ("me" . 2) ("mine" . 1)))
          2))
       (bag-of-words-encoder
         (make-instance 'bag-of-words-encoder
                        :feature-encoder feature-indexer
                        :feature-mapper (lambda (fn document)
                                          (map nil fn document))
                        :kind :frequency)))
  (encode bag-of-words-encoder '("All" "through" "day" "I" "me" "mine"
                                 "I" "me" "mine" "I" "me" "mine")))
=> #((0 . 3.0d0) (1 . 3.0d0))
<a id="x-28MGL-NLP-3AFEATURE-ENCODER-20-28MGL-PAX-3AREADER-20MGL-NLP-3ABAG-OF-WORDS-ENCODER-29-29"></a>
- [reader] FEATURE-ENCODER BAG-OF-WORDS-ENCODER (:FEATURE-ENCODER)
<a id="x-28MGL-NLP-3AFEATURE-MAPPER-20-28MGL-PAX-3AREADER-20MGL-NLP-3ABAG-OF-WORDS-ENCODER-29-29"></a>
- [reader] FEATURE-MAPPER BAG-OF-WORDS-ENCODER (:FEATURE-MAPPER)
<a id="x-28MGL-NLP-3AENCODED-FEATURE-TEST-20-28MGL-PAX-3AREADER-20MGL-NLP-3ABAG-OF-WORDS-ENCODER-29-29"></a>
- [reader] ENCODED-FEATURE-TEST BAG-OF-WORDS-ENCODER (:ENCODED-FEATURE-TEST = #'EQL)
<a id="x-28MGL-NLP-3AENCODED-FEATURE-TYPE-20-28MGL-PAX-3AREADER-20MGL-NLP-3ABAG-OF-WORDS-ENCODER-29-29"></a>
- [reader] ENCODED-FEATURE-TYPE BAG-OF-WORDS-ENCODER (:ENCODED-FEATURE-TYPE = T)
<a id="x-28MGL-NLP-3ABAG-OF-WORDS-KIND-20-28MGL-PAX-3AREADER-20MGL-NLP-3ABAG-OF-WORDS-ENCODER-29-29"></a>
-
[reader] BAG-OF-WORDS-KIND BAG-OF-WORDS-ENCODER (:KIND = :BINARY)