# TensorCast.jl
This package lets you work with multi-dimensional arrays in index notation, by defining a few macros which translate this to broadcasting, permuting, and reducing operations.
The first is `@cast`, which deals both with "casting" into new shapes (including going to and from an array-of-arrays) and with broadcasting:
```julia
@cast A[row][col] := B[row, col]        # slice a matrix B into rows, also @cast A[r] := B[r,:]
@cast C[(i,j), (k,ℓ)] := D.x[i,j,k,ℓ]   # reshape a 4-tensor D.x to give a matrix
@cast E[φ,γ] = F[φ]^2 * exp(G[γ])       # broadcast E .= F.^2 .* exp.(G') into existing E
@cast _[i] := isodd(i) ? log(i) : V[i]  # broadcast a function of the index values
@cast T[x,y,n] := outer(M[:,n])[x,y]    # generalised mapslices, vector -> matrix function
```
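For orientation, the first two lines above are roughly equivalent to the following plain Julia (a sketch only, using the names from the examples; the macro itself may generate lazier or more careful code):

```julia
A = collect(eachslice(B; dims=1))                # vector of row-slices, so A[row][col] == B[row, col]
C = reshape(D.x, size(D.x,1) * size(D.x,2), :)   # combine (i,j) and (k,ℓ); i and k vary fastest
```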
Second, `@reduce` takes sums (or other reductions) over the indicated directions. Among such sums is matrix multiplication, which can be done more efficiently using `@matmul` instead:
```julia
@reduce K[_,b] := prod(a,c) L.field[a,b,c]            # product over dims=(1,3), drop dims=3
@reduce S[i] = sum(n) -P[i,n] * log(P[i,n]/Q[n])       # sum!(S, @. -P*log(P/Q')) into existing S
@matmul M[i,j] := sum(k,k′) U[i,k,k′] * V[(k,k′),j]    # matrix multiplication, plus reshape
```
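Again roughly, the first and last lines correspond to something like this (a sketch with the same names; the generated code may differ in detail):

```julia
K = dropdims(prod(L.field; dims=(1,3)); dims=3)   # reduce over dims 1 and 3, keep a trivial first dimension
M = reshape(U, size(U,1), :) * V                  # flatten (k,k′) of U, then one call to *
```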
The same notation with `@cast` applies a function accepting the `dims` keyword, without reducing:
```julia
@cast W[i,j,c,n] := cumsum(c) X[c,i,j,n]^2   # permute, broadcast, cumsum(; dims=3)
```
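This example is roughly equivalent to (a sketch using the names above):

```julia
W = cumsum(permutedims(X, (2,3,1,4)) .^ 2; dims=3)   # permute c to position 3, square, then cumulative sum
```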
All of these are converted into array commands like `reshape` and `permutedims` and `eachslice`, plus a broadcasting expression if needed, and `sum` / `sum!`, or `*` / `mul!`. This package just provides a convenient notation.
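To see what a particular call turns into, Julia's built-in `@macroexpand` works on these macros as on any others:

```julia
using TensorCast
@macroexpand @cast A[r] := B[r,:]   # returns the lowered expression, built from calls like those above
```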
> [!WARNING]
> Writing `@reduce C[i,j] := sum(k) A[i,k] * B[k,j]` is a terrible way to perform matrix multiplication. This creates a huge array `A .* reshape(B, 1, size(B)...)` before summing, which is much slower than the built-in `A * B`. See below for other packages which aim to be good at such operations.
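If you do want matrix multiplication in this notation, `@matmul` is the right tool, since it calls `*` (or `mul!`) instead of broadcasting and summing:

```julia
@matmul C[i,j] := sum(k) A[i,k] * B[k,j]   # same result, but computed as A * B
```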
From version 0.4, it relies on TransmuteDims.jl to handle re-ordering of dimensions, and LazyStack.jl to handle slices. It should also now work with OffsetArrays.jl:
```julia
using OffsetArrays
@cast R[n,c] := n^2 + rand(3)[c]  (n in -5:5)   # arbitrary indexing
```
And it can be used with some packages which modify broadcasting:
```julia
using Strided, LoopVectorization, LazyArrays
@cast @strided E[φ,γ] = F[φ]^2 * exp(G[γ])            # multi-threaded
@reduce @turbo S[i] := sum(n) -P[i,n] * log(P[i,n])   # SIMD-enhanced
@reduce @lazy M[i,j] := sum(k) U[i,k] * V[j,k]        # non-materialised
```
It should work automatically with most array types. This includes GPU arrays such as those from CUDA.jl, whose broadcasting is executed on the device.
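For example, with CUDA.jl (a sketch, assuming a working GPU; `CUDA.rand` makes `Float32` arrays on the device):

```julia
using CUDA, TensorCast
F = CUDA.rand(10)
G = CUDA.rand(20)
@cast E[φ,γ] := F[φ]^2 * exp(G[γ])   # E is a 10×20 CuArray, computed on the device
```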
## Installation
```julia
using Pkg; Pkg.add("TensorCast")
```
The current version requires Julia 1.6 or later. There are a few pages of documentation.
## Elsewhere
Similar notation is also used by some other packages, although all of them use an implicit sum over repeated indices. TensorOperations.jl performs Einstein-convention contractions and traces:
```julia
@tensor A[i] := B[i,j] * C[j,k] * D[k]   # matrix multiplication, A = B * C * D
@tensor D[i] := 2 * E[i] + F[i,k,k]      # partial trace of F only, Dᵢ = 2Eᵢ + Σⱼ Fᵢⱼⱼ
```
More general contractions are allowed by OMEinsum.jl, but only one term:
```julia
@ein Z[i,j,ξ] := X[i,k,ξ] * Y[j,k,ξ]   # batched matrix multiplication
Z = ein" ikξ,jkξ -> ijξ "(X,Y)         # numpy-style notation
```
Einsum.jl and Tullio.jl allow arbitrary (element-wise) functions:
```julia
@einsum S[i] := -P[i,n] * log(P[i,n]/Q[n])    # sum over n, for each i (also with @reduce above)
@einsum G[i] := 2 * E[i] + F[i,k,k]           # the sum includes everything: Gᵢ = Σⱼ (2Eᵢ + Fᵢⱼⱼ)
@tullio Z[i,j] := abs(A[i+x, j+y] * K[x,y])   # convolution, summing over x and y
```
Notice that `@einsum` and `@tullio` sum the entire right hand side, like `@reduce` does, while `@tensor` sums individual terms.
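A plain-loop sketch of that difference, re-using `E` and `F` from the examples above:

```julia
G = [sum(2 * E[i] + F[i,k,k] for k in axes(F,2)) for i in axes(F,1)]   # @einsum / @tullio: the sum wraps everything
D = [2 * E[i] + sum(F[i,k,k] for k in axes(F,2)) for i in axes(F,1)]   # @tensor: only the repeated index is summed
```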
These produce very different code for actually doing what you request: the macros `@tensor` and `@ein` work out a sequence of basic tensor operations (like contraction and traces), while `@einsum` and `@tullio` write the necessary set of nested loops directly (plus optimisations). This package's macros `@cast`, `@reduce` and `@matmul` instead write everything in terms of whole-array operations (like `reshape`, `permutedims` and broadcasting).
For those who speak Python, `@cast` and `@reduce` allow similar operations to einshape or einops (minus the cool video, but plus broadcasting). In the tests, this file translates many examples. Python's `einsum` is closer to `OMEinsum.@ein` and `TensorOperations.@tensor`, and to this package's `@matmul`.
## About
This was a holiday project to learn a bit of metaprogramming, originally TensorSlice.jl. But it suffered a little scope creep.