<a id="x-28MGL-MAT-3A-40MAT-MANUAL-20MGL-PAX-3ASECTION-29"></a>

MAT Manual

Table of Contents

[in package MGL-MAT]

<a id="x-28-22mgl-mat-22-20ASDF-2FSYSTEM-3ASYSTEM-29"></a>

1 The MGL-MAT ASDF System

<a id="x-28MGL-MAT-3A-40MAT-LINKS-20MGL-PAX-3ASECTION-29"></a>

2 Links

Here is the official repository and the HTML documentation for the latest version.

<a id="x-28MGL-MAT-3A-40MAT-INTRODUCTION-20MGL-PAX-3ASECTION-29"></a>

3 Introduction

<a id="x-28MGL-MAT-3A-40MAT-WHAT-IS-IT-20MGL-PAX-3ASECTION-29"></a>

3.1 What's MGL-MAT?

MGL-MAT is a library for working with multi-dimensional arrays which supports efficient interfacing to foreign and CUDA code with automatic translations between CUDA, foreign and Lisp storage. BLAS and CUBLAS bindings are available.

<a id="x-28MGL-MAT-3A-40MAT-WHAT-KIND-OF-MATRICES-20MGL-PAX-3ASECTION-29"></a>

3.2 What kind of matrices are supported?

Currently only row-major single and double float matrices are supported, but it would be easy to add single and double precision complex types too. Other numeric types, such as byte and native integer, can be added too, but they are not supported by CUBLAS. There are no restrictions on the number of dimensions, and reshaping is possible. All functions operate on the visible portion of the matrix (which is subject to displacement and shaping); invisible elements are not affected.

<a id="x-28MGL-MAT-3A-40MAT-INSTALLATION-20MGL-PAX-3ASECTION-29"></a>

3.3 Where to Get it?

All dependencies are in Quicklisp except for CL-CUDA, which needs to be fetched from GitHub. Just clone CL-CUDA and MGL-MAT into quicklisp/local-projects/ and you are set. MGL-MAT itself lives on GitHub, too.

Prettier-than-markdown HTML documentation cross-linked with other libraries is available as part of PAX World.

<a id="x-28MGL-MAT-3A-40MAT-TUTORIAL-20MGL-PAX-3ASECTION-29"></a>

4 Tutorial

We are going to see how to create matrices and access their contents.

Creating matrices is just like creating lisp arrays:

(make-mat 6)
==> #<MAT 6 A #(0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0)>

(make-mat '(2 3) :ctype :float :initial-contents '((1 2 3) (4 5 6)))
==> #<MAT 2x3 AB #2A((1.0 2.0 3.0) (4.0 5.0 6.0))>

(make-mat '(2 3 4) :initial-element 1)
==> #<MAT 2x3x4 A #3A(((1.0d0 1.0d0 1.0d0 1.0d0)
-->                    (1.0d0 1.0d0 1.0d0 1.0d0)
-->                    (1.0d0 1.0d0 1.0d0 1.0d0))
-->                   ((1.0d0 1.0d0 1.0d0 1.0d0)
-->                    (1.0d0 1.0d0 1.0d0 1.0d0)
-->                    (1.0d0 1.0d0 1.0d0 1.0d0)))>

The most prominent difference from lisp arrays is that MATs are always numeric and their element type (called CTYPE here) defaults to :DOUBLE.
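
For instance, a minimal sketch using the MAT-CTYPE reader:

(list (mat-ctype (make-mat 3))
      (mat-ctype (make-mat 3 :ctype :float)))
==> (:DOUBLE :FLOAT)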

Individual elements can be accessed or set with MREF:

(let ((m (make-mat '(2 3))))
  (setf (mref m 0 0) 1)
  (setf (mref m 0 1) (* 2 (mref m 0 0)))
  (incf (mref m 0 2) 4)
  m)
==> #<MAT 2x3 AB #2A((1.0d0 2.0d0 4.0d0) (0.0d0 0.0d0 0.0d0))>

Compared to AREF, MREF is a very expensive operation and it's best used sparingly. Instead, typical code relies much more on matrix-level operations:

(princ (scal! 2 (fill! 3 (make-mat 4))))
.. #<MAT 4 BF #(6.0d0 6.0d0 6.0d0 6.0d0)>
==> #<MAT 4 ABF #(6.0d0 6.0d0 6.0d0 6.0d0)>

The content of a matrix can be accessed in various representations called facets. MGL-MAT takes care of synchronizing these facets by copying data around lazily, but doing its best to share storage for facets that allow it.

Notice the ABF in the printed results. It illustrates that behind the scenes FILL! worked on the BACKING-ARRAY facet (hence the B) that's basically a 1d lisp array. SCAL! on the other hand made a foreign call to the BLAS dscal function for which it needed the FOREIGN-ARRAY facet (F). Finally, the A stands for the ARRAY facet that was created when the array was printed. All facets are up-to-date (else some of the characters would be lowercase). This is possible because these three facets actually share storage which is never the case for the CUDA-ARRAY facet. Now if we have a CUDA-capable GPU, CUDA can be enabled with WITH-CUDA*:

(with-cuda* ()
  (princ (scal! 2 (fill! 3 (make-mat 4)))))
.. #<MAT 4 C #(6.0d0 6.0d0 6.0d0 6.0d0)>
==> #<MAT 4 A #(6.0d0 6.0d0 6.0d0 6.0d0)>

Note the lonely C showing that only the CUDA-ARRAY facet was used for both FILL! and SCAL!. When WITH-CUDA* exits and destroys the CUDA context, it destroys all CUDA facets, moving their data to the ARRAY facet, so the returned MAT only has that facet.

When there is no high-level operation that does what we want, we may need to add new operations. This is usually best accomplished by accessing one of the facets directly, as in the following example:

<a id="x-28MGL-MAT-3ALOG-DET-EXAMPLE-20-28MGL-PAX-3AINCLUDE-20-28-3ASTART-20-28MGL-MAT-3ALOGDET-20FUNCTION-29-20-3AEND-20-28MGL-MAT-3A-3AEND-OF-LOGDET-EXAMPLE-20VARIABLE-29-29-20-3AHEADER-NL-20-22-60-60-60commonlisp-22-20-3AFOOTER-NL-20-22-60-60-60-22-29-29"></a>

(defun logdet (mat)
  "Logarithm of the determinant of MAT. Return -1, 1 or 0 (or
  equivalent) to correct for the sign, as the second value."
  (with-facets ((array (mat 'array :direction :input)))
    (lla:logdet array)))

Notice that LOGDET doesn't know about CUDA at all. WITH-FACETS gives it the content of the matrix as a normal multidimensional lisp array, copying the data from the GPU or elsewhere if necessary. This allows new representations (FACETs) to be added easily and it also avoids copying if the facet is already up-to-date. Of course, adding CUDA support to LOGDET could make it more efficient.

Adding support for matrices that, for instance, live on a remote machine is thus possible with a new facet type and existing code would continue to work (albeit possibly slowly). Then one could optimize the bottleneck operations by sending commands over the network instead of copying data.

It is a bad idea to conflate resource management policy and algorithms. MGL-MAT does its best to keep them separate.

<a id="x-28MGL-MAT-3A-40MAT-BASICS-20MGL-PAX-3ASECTION-29"></a>

5 Basics

<a id="x-28MGL-MAT-3AMAT-20CLASS-29"></a>

<a id="x-28MGL-MAT-3AMAT-CTYPE-20-28MGL-PAX-3AREADER-20MGL-MAT-3AMAT-29-29"></a>

<a id="x-28MGL-MAT-3AMAT-DISPLACEMENT-20-28MGL-PAX-3AREADER-20MGL-MAT-3AMAT-29-29"></a>

<a id="x-28MGL-MAT-3AMAT-DIMENSIONS-20-28MGL-PAX-3AREADER-20MGL-MAT-3AMAT-29-29"></a>

<a id="x-28MGL-MAT-3AMAT-DIMENSION-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3AMAT-INITIAL-ELEMENT-20-28MGL-PAX-3AREADER-20MGL-MAT-3AMAT-29-29"></a>

<a id="x-28MGL-MAT-3AMAT-SIZE-20-28MGL-PAX-3AREADER-20MGL-MAT-3AMAT-29-29"></a>

<a id="x-28MGL-MAT-3AMAT-MAX-SIZE-20-28MGL-PAX-3AREADER-20MGL-MAT-3AMAT-29-29"></a>

<a id="x-28MGL-MAT-3AMAKE-MAT-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3AARRAY-TO-MAT-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3AMAT-TO-ARRAY-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3AREPLACE-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3AMREF-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3AROW-MAJOR-MREF-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3AMAT-ROW-MAJOR-INDEX-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3A-40MAT-CTYPES-20MGL-PAX-3ASECTION-29"></a>

6 Element types

<a id="x-28MGL-MAT-3A-2ASUPPORTED-CTYPES-2A-20VARIABLE-29"></a>

<a id="x-28MGL-MAT-3ACTYPE-20TYPE-29"></a>

<a id="x-28MGL-MAT-3A-2ADEFAULT-MAT-CTYPE-2A-20VARIABLE-29"></a>

<a id="x-28MGL-MAT-3ACOERCE-TO-CTYPE-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3A-40MAT-PRINTING-20MGL-PAX-3ASECTION-29"></a>

7 Printing

<a id="x-28MGL-MAT-3A-2APRINT-MAT-2A-20VARIABLE-29"></a>

<a id="x-28MGL-MAT-3A-2APRINT-MAT-FACETS-2A-20VARIABLE-29"></a>

<a id="x-28MGL-MAT-3A-40MAT-SHAPING-20MGL-PAX-3ASECTION-29"></a>

8 Shaping

We are going to discuss various ways to change the visible portion and dimensions of matrices. Conceptually a matrix has an underlying non-displaced storage vector. For (MAKE-MAT 10 :DISPLACEMENT 7 :MAX-SIZE 21) this underlying vector looks like this:

displacement | visible elements  | slack
. . . . . . . 0 0 0 0 0 0 0 0 0 0 . . . .

Whenever a matrix is reshaped (or displaced to in lisp terminology), its displacement and dimensions change but the underlying vector does not.
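
For instance, here is a hedged sketch of the diagram above in code, assuming RESHAPE-AND-DISPLACE! takes the matrix, the new dimensions and the new displacement, in that order:

(let ((m (make-mat 10 :displacement 7 :max-size 21)))
  ;; Narrow the view to 3 elements starting at index 9 of the
  ;; underlying vector. No data is moved.
  (reshape-and-displace! m 3 9)
  (mat-dimensions m))
==> (3)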

The rules for accessing displaced matrices are the same as always: multiple readers can run in parallel, but attempts to write will result in an error if there are either readers or writers on any of the matrices that share the same underlying vector.

<a id="x-28MGL-MAT-3A-40MAT-SHAPING-COMPARISON-TO-LISP-20MGL-PAX-3ASECTION-29"></a>

8.1 Comparison to Lisp Arrays

One way to reshape and displace MAT objects is with MAKE-MAT and its DISPLACED-TO argument whose semantics are similar to that of MAKE-ARRAY in that the displacement is relative to the displacement of DISPLACED-TO.

(let* ((base (make-mat 10 :initial-element 5 :displacement 1))
       (mat (make-mat 6 :displaced-to base :displacement 2)))
  (fill! 1 mat)
  (values base mat))
==> #<MAT 1+10+0 A #(5.0d0 5.0d0 1.0d0 1.0d0 1.0d0 1.0d0 1.0d0 1.0d0 5.0d0
-->                  5.0d0)>
==> #<MAT 3+6+2 AB #(1.0d0 1.0d0 1.0d0 1.0d0 1.0d0 1.0d0)>

There are important semantic differences compared to lisp arrays, all of which follow from the fact that displacement operates on the underlying conceptual non-displaced vector.

<a id="x-28MGL-MAT-3A-40MAT-SHAPING-FUNCTIONAL-20MGL-PAX-3ASECTION-29"></a>

8.2 Functional Shaping

The following functions are collectively called the functional shaping operations, since they don't alter their arguments in any way. Still, since storage is aliased, modifications to the returned matrix will affect the original.
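
For example, a minimal sketch of this aliasing, assuming RESHAPE takes the matrix and the new dimensions:

(let* ((m (make-mat '(2 3)))
       (v (reshape m 6)))  ; same storage, seen as a 6-vector
  (fill! 1 v)
  m)
;; => M is now all ones, because V aliases its storage.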

<a id="x-28MGL-MAT-3ARESHAPE-AND-DISPLACE-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3ARESHAPE-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3ADISPLACE-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3A-40MAT-SHAPING-DESTRUCTIVE-20MGL-PAX-3ASECTION-29"></a>

8.3 Destructive Shaping

The following destructive operations don't alter the contents of the matrix, but change what is visible. ADJUST! is the odd one out; it may create a new MAT.

<a id="x-28MGL-MAT-3ARESHAPE-AND-DISPLACE-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3ARESHAPE-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3ADISPLACE-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3ARESHAPE-TO-ROW-MATRIX-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3AWITH-SHAPE-AND-DISPLACEMENT-20MGL-PAX-3AMACRO-29"></a>

<a id="x-28MGL-MAT-3AADJUST-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3A-40MAT-ASSEMBLING-20MGL-PAX-3ASECTION-29"></a>

9 Assembling

The functions here assemble a single MAT from a number of MATs.
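
A hedged sketch (the argument order, AXIS followed by a list of MATs, is an assumption):

(stack 0 (list (make-mat '(2 3) :initial-element 1)
               (make-mat '(2 3) :initial-element 2)))
;; => a 4x3 MAT whose first two rows are 1s and last two rows are 2s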

<a id="x-28MGL-MAT-3ASTACK-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3ASTACK-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3A-40MAT-CACHING-20MGL-PAX-3ASECTION-29"></a>

10 Caching

Allocating and initializing a MAT object and its necessary facets can be expensive. The following macros remember the previous value of a binding in the same thread and /place/. Only weak references are constructed so the cached objects can be garbage collected.

While the cache is global, thread safety is guaranteed by having separate subcaches per thread. Each subcache is keyed by a /place/ object that's either explicitly specified or else is unique to each invocation of the caching macro, so different occurrences of caching macros in the source never share data. Still, recursion could lead to data sharing between different invocations of the same function. To prevent this, the cached object is removed from the cache while it is in use, so other invocations will create a fresh one. This isn't particularly efficient, but at least it's safe.
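
As a hedged sketch of how these macros are typically used (the exact lambda list of WITH-THREAD-CACHED-MAT is assumed to be a variable name, the dimensions, and MAKE-MAT style keyword arguments; ADD-SMALL-NOISE! is a hypothetical helper):

(defun add-small-noise! (m)
  ;; Reuse a per-thread scratch matrix instead of allocating a new one
  ;; on every call.
  (with-thread-cached-mat (scratch (mat-dimensions m) :ctype (mat-ctype m))
    (gaussian-random! scratch)
    (axpy! 0.01 scratch m)))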

<a id="x-28MGL-MAT-3AWITH-THREAD-CACHED-MAT-20MGL-PAX-3AMACRO-29"></a>

<a id="x-28MGL-MAT-3AWITH-THREAD-CACHED-MATS-20MGL-PAX-3AMACRO-29"></a>

<a id="x-28MGL-MAT-3AWITH-ONES-20MGL-PAX-3AMACRO-29"></a>

<a id="x-28MGL-MAT-3A-40MAT-BLAS-20MGL-PAX-3ASECTION-29"></a>

11 BLAS Operations

Only some BLAS functions are implemented, but it should be easy to add more as needed. All of them default to using CUDA, if it is initialized and enabled (see USE-CUDA-P).
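
A hedged sketch with Level 1 operations, assuming the usual BLAS argument conventions (AXPY! computes Y := ALPHA*X + Y, ASUM sums absolute values):

(let ((x (make-mat 3 :initial-element 1))
      (y (make-mat 3 :initial-element 2)))
  (axpy! 0.5 x y)  ; Y is now 2.5 everywhere
  (asum y))
==> 7.5d0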

Level 1 BLAS operations

<a id="x-28MGL-MAT-3AASUM-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3AAXPY-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3ACOPY-21-20FUNCTION-29"></a>

<a id="x-28CL-CUDA-2ELANG-2EBUILT-IN-3ADOT-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3ANRM2-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3ASCAL-21-20FUNCTION-29"></a>

Level 3 BLAS operations

<a id="x-28MGL-MAT-3AGEMM-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3A-40MAT-DESTRUCTIVE-API-20MGL-PAX-3ASECTION-29"></a>

12 Destructive API

<a id="x-28MGL-MAT-3A-2ESQUARE-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3A-2ESQRT-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3A-2ELOG-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3A-2EEXP-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3A-2EEXPT-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3A-2EINV-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3A-2ELOGISTIC-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3A-2E-2B-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3A-2E-2A-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3AGEEM-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3AGEERV-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3A-2E-3C-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3A-2EMIN-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3A-2EMAX-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3AADD-SIGN-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3AFILL-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3ASUM-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3ASCALE-ROWS-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3ASCALE-COLUMNS-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3A-2ESIN-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3A-2ECOS-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3A-2ETAN-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3A-2ESINH-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3A-2ECOSH-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3A-2ETANH-21-20FUNCTION-29"></a>

Finally, some neural network operations.

<a id="x-28MGL-MAT-3ACONVOLVE-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3ADERIVE-CONVOLVE-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3AMAX-POOL-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3ADERIVE-MAX-POOL-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3A-40MAT-NON-DESTRUCTIVE-API-20MGL-PAX-3ASECTION-29"></a>

13 Non-destructive API

<a id="x-28MGL-MAT-3ACOPY-MAT-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3ACOPY-ROW-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3ACOPY-COLUMN-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3AMAT-AS-SCALAR-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3ASCALAR-AS-MAT-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3AM-3D-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3ATRANSPOSE-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3AM-2A-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3AMM-2A-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3AM--20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3AM-2B-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3AINVERT-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3ALOGDET-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3A-40MAT-MAPPINGS-20MGL-PAX-3ASECTION-29"></a>

14 Mappings

<a id="x-28MGL-MAT-3AMAP-CONCAT-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3AMAP-DISPLACEMENTS-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3AMAP-MATS-INTO-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3A-40MAT-RANDOM-20MGL-PAX-3ASECTION-29"></a>

15 Random numbers

Unless noted, these work efficiently with CUDA.
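
A minimal sketch (keyword arguments are omitted; see the individual functions for the exact parameters):

(let ((m (make-mat '(2 2))))
  (uniform-random! m)    ; fill with uniformly distributed numbers
  (gaussian-random! m)   ; overwrite with normally distributed ones
  m)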

<a id="x-28MGL-MAT-3ACOPY-RANDOM-STATE-20GENERIC-FUNCTION-29"></a>

<a id="x-28MGL-MAT-3AUNIFORM-RANDOM-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3AGAUSSIAN-RANDOM-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3AMV-GAUSSIAN-RANDOM-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3AORTHOGONAL-RANDOM-21-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3A-40MAT-IO-20MGL-PAX-3ASECTION-29"></a>

16 I/O

<a id="x-28MGL-MAT-3A-2AMAT-HEADERS-2A-20VARIABLE-29"></a>

<a id="x-28MGL-MAT-3AWRITE-MAT-20GENERIC-FUNCTION-29"></a>

<a id="x-28MGL-MAT-3AREAD-MAT-20GENERIC-FUNCTION-29"></a>

<a id="x-28MGL-MAT-3A-40MAT-DEBUGGING-20MGL-PAX-3ASECTION-29"></a>

17 Debugging

The largest class of bugs has to do with synchronization of facets being broken. This is almost always caused by an operation that misspecifies the DIRECTION argument of WITH-FACET. For example, the matrix argument of SCAL! should be accessed with direction :IO. But if it's :INPUT instead, then subsequent access to the ARRAY facet will not see the changes made by SCAL!; and if it's :OUTPUT, then any changes made to the ARRAY facet since the last update of the CUDA-ARRAY facet will not be copied, so SCAL! will compute the wrong result from stale input.
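
Here is a hedged sketch of this kind of bug (hypothetical code, not from the library):

(let ((m (make-mat 3 :initial-element 1)))
  ;; Wrong: the body modifies the facet, so the direction should be :IO.
  (with-facets ((a (m 'backing-array :direction :input)))
    (setf (aref a 0) 42d0))
  ;; The write above is never registered, so facets that don't share
  ;; storage with BACKING-ARRAY (notably CUDA-ARRAY) are not updated
  ;; and may keep returning the stale 1.0d0.
  m)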

Using the SLIME inspector or trying to access the CUDA-ARRAY facet from threads other than the one in which the corresponding CUDA context was initialized will fail. For now, the easy way out is to debug the code with CUDA disabled (see *CUDA-ENABLED*).

Another thing that tends to come up is figuring out where memory is used.
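
A hedged sketch: MAT-ROOM prints allocation statistics, and WITH-MAT-COUNTERS can measure allocations within a dynamic extent (the keyword and variable names below are assumptions):

(mat-room)  ; print overall storage statistics
(with-mat-counters (:count n-mats :n-bytes n-bytes)
  (make-mat '(1000 1000))
  (format t "~S new mats, ~S bytes~%" n-mats n-bytes))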

<a id="x-28MGL-MAT-3AMAT-ROOM-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3AWITH-MAT-COUNTERS-20MGL-PAX-3AMACRO-29"></a>

<a id="x-28MGL-MAT-3A-40MAT-FACET-API-20MGL-PAX-3ASECTION-29"></a>

18 Facet API

<a id="x-28MGL-MAT-3A-40MAT-FACETS-20MGL-PAX-3ASECTION-29"></a>

18.1 Facets

A MAT is a CUBE (see Cube Manual) whose facets are different representations of numeric arrays. These facets can be accessed with WITH-FACETS with one of the following FACET-NAME locatives:

<a id="x-28MGL-MAT-3ABACKING-ARRAY-20MGL-CUBE-3AFACET-NAME-29"></a>

<a id="x-28ARRAY-20MGL-CUBE-3AFACET-NAME-29"></a>

<a id="x-28MGL-MAT-3AFOREIGN-ARRAY-20MGL-CUBE-3AFACET-NAME-29"></a>

<a id="x-28MGL-MAT-3ACUDA-HOST-ARRAY-20MGL-CUBE-3AFACET-NAME-29"></a>

<a id="x-28MGL-MAT-3ACUDA-ARRAY-20MGL-CUBE-3AFACET-NAME-29"></a>

Facets bound by WITH-FACETS are to be treated as dynamic extent: it is not allowed to keep a reference to them beyond the dynamic scope of WITH-FACETS.

For example, to implement the FILL! operation using only the BACKING-ARRAY, one could do this:

(let ((displacement (mat-displacement x))
      (size (mat-size x)))
 (with-facets ((x* (x 'backing-array :direction :output)))
   (fill x* 1 :start displacement :end (+ displacement size))))

DIRECTION is :OUTPUT because we clobber all values in X. Armed with this knowledge about the direction, WITH-FACETS will not copy data from another facet if the backing array is not up-to-date.

To transpose a square matrix in place with the ARRAY facet:

(destructuring-bind (n-rows n-columns) (mat-dimensions x)
  (assert (= n-rows n-columns))
  (with-facets ((x* (x 'array :direction :io)))
    (dotimes (row n-rows)
      (loop for column from (1+ row) below n-columns
            do (rotatef (aref x* row column) (aref x* column row))))))

Note that DIRECTION is :IO, because we need the data in this facet to be up-to-date (that's the input part) and we are invalidating all other facets by changing values (that's the output part).

To sum the values of a matrix using the FOREIGN-ARRAY facet:

(let ((sum 0))
  (with-facets ((x* (x 'foreign-array :direction :input)))
    (let ((pointer (offset-pointer x*)))
      (loop for index below (mat-size x)
            do (incf sum (cffi:mem-aref pointer (mat-ctype x) index)))))
  sum)

See DIRECTION for a complete description of :INPUT, :OUTPUT and :IO. For MAT objects, that needs to be refined. If a MAT is reshaped and/or displaced in a way that not all elements are visible, then the invisible elements are always kept intact and copied around. This is accomplished by turning :OUTPUT into :IO automatically on such MATs.

We have finished our introduction to the various facets. It must be said though that one can do anything without ever accessing a facet directly or even being aware of them, as most operations on MATs take care of choosing the most appropriate facet behind the scenes. In particular, most operations automatically use CUDA, if available and initialized. See WITH-CUDA* for details.

<a id="x-28MGL-MAT-3A-40MAT-FOREIGN-20MGL-PAX-3ASECTION-29"></a>

18.2 Foreign arrays

One facet of MAT objects is FOREIGN-ARRAY, which is backed by a memory area that is either a pinned lisp array or memory allocated in foreign memory, depending on *FOREIGN-ARRAY-STRATEGY*.

<a id="x-28MGL-MAT-3AFOREIGN-ARRAY-20CLASS-29"></a>

<a id="x-28MGL-MAT-3A-2AFOREIGN-ARRAY-STRATEGY-2A-20-28VARIABLE-20-22-see-20below--22-29-29"></a>

<a id="x-28MGL-MAT-3AFOREIGN-ARRAY-STRATEGY-20TYPE-29"></a>

<a id="x-28MGL-MAT-3APINNING-SUPPORTED-P-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3AFOREIGN-ROOM-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3A-40MAT-CUDA-20MGL-PAX-3ASECTION-29"></a>

18.3 CUDA

<a id="x-28MGL-MAT-3ACUDA-AVAILABLE-P-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3AWITH-CUDA-2A-20MGL-PAX-3AMACRO-29"></a>

<a id="x-28MGL-MAT-3ACALL-WITH-CUDA-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3A-2ACUDA-ENABLED-2A-20VARIABLE-29"></a>

<a id="x-28MGL-MAT-3ACUDA-ENABLED-20-28MGL-PAX-3AACCESSOR-20MGL-MAT-3AMAT-29-29"></a>

<a id="x-28MGL-MAT-3A-2ADEFAULT-MAT-CUDA-ENABLED-2A-20VARIABLE-29"></a>

<a id="x-28MGL-MAT-3A-2AN-MEMCPY-HOST-TO-DEVICE-2A-20VARIABLE-29"></a>

<a id="x-28MGL-MAT-3A-2AN-MEMCPY-DEVICE-TO-HOST-2A-20VARIABLE-29"></a>

<a id="x-28MGL-MAT-3A-2ACUDA-DEFAULT-DEVICE-ID-2A-20VARIABLE-29"></a>

<a id="x-28MGL-MAT-3A-2ACUDA-DEFAULT-RANDOM-SEED-2A-20VARIABLE-29"></a>

<a id="x-28MGL-MAT-3A-2ACUDA-DEFAULT-N-RANDOM-STATES-2A-20VARIABLE-29"></a>

<a id="x-28MGL-MAT-3A-40MAT-CUDA-MEMORY-MANAGEMENT-20MGL-PAX-3ASECTION-29"></a>

18.3.1 CUDA Memory Management

The GPU (called device in CUDA terminology) has its own memory, and it can only perform computation on data in this device memory, so there is some copying involved to and from main memory. Efficient algorithms often allocate device memory up front and minimize the amount of copying that has to be done by computing as much as possible on the GPU.

MGL-MAT reduces the cost of device memory allocations by maintaining a cache of currently unused allocations from which it first tries to satisfy allocation requests. The total size of all the allocated device memory regions (be they in use or currently unused but cached) is never more than N-POOL-BYTES as specified in WITH-CUDA*. N-POOL-BYTES being NIL means no limit.
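
For example, a hedged sketch that caps the pool at roughly 512 MiB (passing N-POOL-BYTES as a keyword argument to WITH-CUDA* is an assumption):

(with-cuda* (:n-pool-bytes (* 512 (expt 2 20)))
  (princ (fill! 3 (make-mat 4))))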

<a id="x-28MGL-MAT-3ACUDA-OUT-OF-MEMORY-20CONDITION-29"></a>

<a id="x-28MGL-MAT-3ACUDA-ROOM-20FUNCTION-29"></a>

That's it about reducing the cost of allocations. The other important performance consideration, minimizing the amount of copying done, is very hard to achieve if the data doesn't fit in device memory, which is often a very limited resource. In this case, the next best thing is to do the copying concurrently with computation.

<a id="x-28MGL-MAT-3AWITH-SYNCING-CUDA-FACETS-20MGL-PAX-3AMACRO-29"></a>

<a id="x-28MGL-MAT-3A-2ASYNCING-CUDA-FACETS-SAFE-P-2A-20VARIABLE-29"></a>

Also note that often the easiest thing to do is to prevent the use of CUDA (and consequently the creation of CUDA-ARRAY facets, and allocations). This can be done either by binding *CUDA-ENABLED* to NIL or by setting CUDA-ENABLED to NIL on specific matrices.
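
A minimal sketch of the two knobs mentioned above:

;; Globally:
(setq *cuda-enabled* nil)
;; Or per matrix:
(let ((m (make-mat 10)))
  (setf (cuda-enabled m) nil)
  m)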

<a id="x-28MGL-MAT-3A-40MAT-EXTENSIONS-20MGL-PAX-3ASECTION-29"></a>

19 Writing Extensions

New operations are usually implemented in lisp, CUDA, or by calling a foreign function in, for instance, BLAS, CUBLAS, CURAND.

<a id="x-28MGL-MAT-3A-40MAT-LISP-EXTENSIONS-20MGL-PAX-3ASECTION-29"></a>

19.1 Lisp Extensions

<a id="x-28MGL-MAT-3ADEFINE-LISP-KERNEL-20MGL-PAX-3AMACRO-29"></a>

<a id="x-28MGL-MAT-3A-2ADEFAULT-LISP-KERNEL-DECLARATIONS-2A-20VARIABLE-29"></a>

<a id="x-28MGL-MAT-3A-40MAT-CUDA-EXTENSIONS-20MGL-PAX-3ASECTION-29"></a>

19.2 CUDA Extensions

<a id="x-28MGL-MAT-3AUSE-CUDA-P-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3ACHOOSE-1D-BLOCK-AND-GRID-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3ACHOOSE-2D-BLOCK-AND-GRID-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3ACHOOSE-3D-BLOCK-AND-GRID-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3ADEFINE-CUDA-KERNEL-20MGL-PAX-3AMACRO-29"></a>

<a id="x-28MGL-MAT-3A-40MAT-CUBLAS-20MGL-PAX-3ASECTION-29"></a>

19.2.1 CUBLAS

Within WITH-CUDA*, BLAS Operations will automatically use CUBLAS, so there is no need to call these directly.

<a id="x-28MGL-MAT-3ACUBLAS-ERROR-20CONDITION-29"></a>

<a id="x-28MGL-MAT-3ACUBLAS-ERROR-FUNCTION-NAME-20-28MGL-PAX-3AREADER-20MGL-MAT-3ACUBLAS-ERROR-29-29"></a>

<a id="x-28MGL-MAT-3ACUBLAS-ERROR-STATUS-20-28MGL-PAX-3AREADER-20MGL-MAT-3ACUBLAS-ERROR-29-29"></a>

<a id="x-28MGL-MAT-3A-2ACUBLAS-HANDLE-2A-20VARIABLE-29"></a>

<a id="x-28MGL-MAT-3ACUBLAS-CREATE-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3ACUBLAS-DESTROY-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3AWITH-CUBLAS-HANDLE-20MGL-PAX-3AMACRO-29"></a>

<a id="x-28MGL-MAT-3ACUBLAS-GET-VERSION-20FUNCTION-29"></a>

<a id="x-28MGL-MAT-3A-40MAT-CURAND-20MGL-PAX-3ASECTION-29"></a>

19.2.2 CURAND

This is the low-level CURAND API. You probably want Random numbers instead.

<a id="x-28MGL-MAT-3AWITH-CURAND-STATE-20MGL-PAX-3AMACRO-29"></a>

<a id="x-28MGL-MAT-3A-2ACURAND-STATE-2A-20VARIABLE-29"></a>

<a id="x-28MGL-MAT-3ACURAND-XORWOW-STATE-20CLASS-29"></a>

<a id="x-28MGL-MAT-3AN-STATES-20-28MGL-PAX-3AREADER-20MGL-MAT-3ACURAND-XORWOW-STATE-29-29"></a>

<a id="x-28MGL-MAT-3ASTATES-20-28MGL-PAX-3AREADER-20MGL-MAT-3ACURAND-XORWOW-STATE-29-29"></a>

Cube Manual

Table of Contents

[in package MGL-CUBE]

<a id="x-28MGL-CUBE-3A-40CUBE-LINKS-20MGL-PAX-3ASECTION-29"></a>

1 Links

Here is the official repository and the HTML documentation for the latest version.

<a id="x-28MGL-CUBE-3A-40CUBE-INTRODUCTION-20MGL-PAX-3ASECTION-29"></a>

2 Introduction

This is the library on which MGL-MAT (see MAT Manual) is built. The idea of automatically translating between various representations may be useful for other applications, so it got its own package and all ties to MGL-MAT have been severed.

This package defines CUBE, an abstract base class that provides a framework for automatic conversion between various representations of the same data. To define a cube, CUBE needs to be subclassed and the Facet Extension API implemented.

If you are only interested in how to use cubes in general, read Basics, Lifetime and Facet Barriers.

If you want to implement a new cube datatype, then see Facets, Facet Extension API, and The Default Implementation of CALL-WITH-FACET*.

<a id="x-28MGL-CUBE-3A-40CUBE-BASICS-20MGL-PAX-3ASECTION-29"></a>

3 Basics

Here we learn what a CUBE is and how to access the data in it with WITH-FACET.
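
For example, a minimal sketch (since a MAT from MGL-MAT is a CUBE, it serves as the cube here):

(let ((m (mgl-mat:make-mat '(2 2) :initial-element 3)))
  (with-facet (a (m 'array :direction :input))
    (aref a 0 0)))
==> 3.0d0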

<a id="x-28MGL-CUBE-3ACUBE-20CLASS-29"></a>

<a id="x-28MGL-CUBE-3AWITH-FACET-20MGL-PAX-3AMACRO-29"></a>

<a id="x-28MGL-CUBE-3ADIRECTION-20TYPE-29"></a>

<a id="x-28MGL-CUBE-3AWITH-FACETS-20MGL-PAX-3AMACRO-29"></a>

<a id="x-28MGL-CUBE-3A-40CUBE-SYNCHRONIZATION-20MGL-PAX-3ASECTION-29"></a>

4 Synchronization

Cubes keep track of which facets are used and which are up-to-date in order to perform automatic translation between facets. WITH-FACET and other operations access and make changes to this metadata, so thread safety is a concern. In this section, we detail how to relax the default thread safety guarantees.

A related concern is async signal safety which arises most often when C-c'ing or killing a thread or when the extremely nasty WITH-TIMEOUT macro is used. In a nutshell, changes to cube metadata are always made with interrupts disabled so things should be async signal safe.

<a id="x-28MGL-CUBE-3ASYNCHRONIZATION-20-28MGL-PAX-3AACCESSOR-20MGL-CUBE-3ACUBE-29-29"></a>

<a id="x-28MGL-CUBE-3A-2ADEFAULT-SYNCHRONIZATION-2A-20VARIABLE-29"></a>

<a id="x-28MGL-CUBE-3A-2AMAYBE-SYNCHRONIZE-CUBE-2A-20VARIABLE-29"></a>

<a id="x-28MGL-CUBE-3A-40CUBE-FACETS-20MGL-PAX-3ASECTION-29"></a>

5 Facets

The basic currency for implementing new cube types is the FACET. Simply using a cube only involves facet names and values, never facets themselves.

<a id="x-28MGL-CUBE-3AFACETS-20FUNCTION-29"></a>

<a id="x-28MGL-CUBE-3AFIND-FACET-20FUNCTION-29"></a>

<a id="x-28MGL-CUBE-3AFACET-20CLASS-29"></a>

<a id="x-28MGL-CUBE-3AFACET-NAME-20MGL-PAX-3ASTRUCTURE-ACCESSOR-29"></a>

<a id="x-28MGL-CUBE-3AFACET-VALUE-20MGL-PAX-3ASTRUCTURE-ACCESSOR-29"></a>

<a id="x-28MGL-CUBE-3AFACET-DESCRIPTION-20MGL-PAX-3ASTRUCTURE-ACCESSOR-29"></a>

<a id="x-28MGL-CUBE-3AFACET-UP-TO-DATE-P-20MGL-PAX-3ASTRUCTURE-ACCESSOR-29"></a>

<a id="x-28MGL-CUBE-3AFACET-N-WATCHERS-20MGL-PAX-3ASTRUCTURE-ACCESSOR-29"></a>

<a id="x-28MGL-CUBE-3AFACET-WATCHER-THREADS-20MGL-PAX-3ASTRUCTURE-ACCESSOR-29"></a>

<a id="x-28MGL-CUBE-3AFACET-DIRECTION-20MGL-PAX-3ASTRUCTURE-ACCESSOR-29"></a>

<a id="x-28MGL-CUBE-3A-40CUBE-FACET-EXTENSION-API-20MGL-PAX-3ASECTION-29"></a>

6 Facet Extension API

Many of the generic functions in this section take FACET arguments. FACET is a structure and is not intended to be subclassed. To be able to add specialized methods, the name of the facet (FACET-NAME) is also passed as the argument right in front of the corresponding facet argument.

In summary, define EQL specializers on facet name arguments, and use FACET-DESCRIPTION to associate arbitrary information with facets.

<a id="x-28MGL-CUBE-3AMAKE-FACET-2A-20GENERIC-FUNCTION-29"></a>

<a id="x-28MGL-CUBE-3ADESTROY-FACET-2A-20GENERIC-FUNCTION-29"></a>

<a id="x-28MGL-CUBE-3ACOPY-FACET-2A-20GENERIC-FUNCTION-29"></a>

<a id="x-28MGL-CUBE-3ACALL-WITH-FACET-2A-20GENERIC-FUNCTION-29"></a>

<a id="x-28MGL-CUBE-3AFACET-UP-TO-DATE-P-2A-20GENERIC-FUNCTION-29"></a>

<a id="x-28MGL-CUBE-3ASELECT-COPY-SOURCE-FOR-FACET-2A-20GENERIC-FUNCTION-29"></a>

PAX integration follows, don't worry about it if you don't use PAX, but you really should (see MGL-PAX::@PAX-MANUAL).

<a id="x-28MGL-CUBE-3AFACET-NAME-20MGL-PAX-3ALOCATIVE-29"></a>

<a id="x-28MGL-CUBE-3ADEFINE-FACET-NAME-20MGL-PAX-3AMACRO-29"></a>

Also see The Default Implementation of CALL-WITH-FACET*.

<a id="x-28MGL-CUBE-3A-40CUBE-DEFAULT-CALL-WITH-FACET-2A-20MGL-PAX-3ASECTION-29"></a>

7 The Default Implementation of CALL-WITH-FACET*

<a id="x-28MGL-CUBE-3ACALL-WITH-FACET-2A-20-28METHOD-20NIL-20-28MGL-CUBE-3ACUBE-20T-20T-20T-29-29-29"></a>

<a id="x-28MGL-CUBE-3AWATCH-FACET-20GENERIC-FUNCTION-29"></a>

<a id="x-28MGL-CUBE-3AUNWATCH-FACET-20GENERIC-FUNCTION-29"></a>

<a id="x-28MGL-CUBE-3A-2ALET-INPUT-THROUGH-P-2A-20VARIABLE-29"></a>

<a id="x-28MGL-CUBE-3A-2ALET-OUTPUT-THROUGH-P-2A-20VARIABLE-29"></a>

<a id="x-28MGL-CUBE-3ACHECK-NO-WRITERS-20FUNCTION-29"></a>

<a id="x-28MGL-CUBE-3ACHECK-NO-WATCHERS-20FUNCTION-29"></a>

<a id="x-28MGL-CUBE-3A-40CUBE-LIFETIME-20MGL-PAX-3ASECTION-29"></a>

8 Lifetime

Lifetime management of facets is manual (but facets of garbage cubes are freed automatically by a finalizer, see MAKE-FACET*). One may destroy a single facet or all facets of a cube with DESTROY-FACET and DESTROY-CUBE, respectively. Also see Facet Barriers.

<a id="x-28MGL-CUBE-3ADESTROY-FACET-20FUNCTION-29"></a>

<a id="x-28MGL-CUBE-3ADESTROY-CUBE-20FUNCTION-29"></a>

In some cases it is useful to declare the intent to use a facet in the future to prevent its destruction. Hence, every facet has a reference count which starts from 0. The reference count is incremented and decremented by ADD-FACET-REFERENCE-BY-NAME and REMOVE-FACET-REFERENCE-BY-NAME, respectively. If it is positive, then the facet will not be destroyed by explicit DESTROY-FACET and DESTROY-CUBE calls, but it will still be destroyed by the finalizer to prevent resource leaks caused by stray references.
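
A hedged sketch (the argument order, CUBE followed by FACET-NAME, is an assumption):

(let ((m (mgl-mat:make-mat 4)))
  ;; Make sure the ARRAY facet exists, then protect it.
  (with-facet (a (m 'array :direction :input))
    (declare (ignore a)))
  (add-facet-reference-by-name m 'array)
  (destroy-facet m 'array)             ; no-op: the reference count is positive
  (remove-facet-reference-by-name m 'array)
  (destroy-facet m 'array))            ; now the facet is actually destroyed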

<a id="x-28MGL-CUBE-3AADD-FACET-REFERENCE-BY-NAME-20FUNCTION-29"></a>

<a id="x-28MGL-CUBE-3AREMOVE-FACET-REFERENCE-BY-NAME-20FUNCTION-29"></a>

<a id="x-28MGL-CUBE-3AREMOVE-FACET-REFERENCE-20FUNCTION-29"></a>

<a id="x-28MGL-CUBE-3A-40CUBE-FACET-BARRIER-20MGL-PAX-3ASECTION-29"></a>

8.1 Facet Barriers

A facility to control lifetime of facets tied to a dynamic extent. Also see Lifetime.

<a id="x-28MGL-CUBE-3AWITH-FACET-BARRIER-20MGL-PAX-3AMACRO-29"></a>

<a id="x-28MGL-CUBE-3ACOUNT-BARRED-FACETS-20FUNCTION-29"></a>


[generated by MGL-PAX]