tntorch - Tensor Network Learning with PyTorch
Read the Docs site: http://tntorch.readthedocs.io/
Welcome to tntorch, a PyTorch-powered modeling and learning library using tensor networks. Such networks are unique in that they use multilinear neural units (instead of non-linear activation units). Features include:
- Basic and fancy indexing of tensors, broadcasting, assignment, etc.
- Tensor decomposition and reconstruction
- Element-wise and tensor-tensor arithmetic
- Building tensors from black-box functions using cross-approximation (see the sketch after this list)
- Finding global maxima and minima from tensors
- Statistics and sensitivity analysis
- Optimization using autodifferentiation
- Misc. operations on tensors: stacking, unfolding, sampling, differentiating, etc.
- Batch operations (work in progress)
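As a quick taste of the cross-approximation feature above, here is a minimal sketch. It assumes the `tn.cross(function=..., domain=...)` call described in the cross-approximation tutorial, so treat the exact keyword arguments as illustrative rather than definitive:

```python
import torch
import tntorch as tn

# Hypothetical black-box function; with tn.cross's default calling convention it
# receives one vector of sampled coordinates per dimension (an assumption here)
def f(x, y, z):
    return torch.exp(-(x**2 + y**2 + z**2))

# 64 x 64 x 64 grid; cross-approximation samples f adaptively instead of
# evaluating all 64**3 entries
domain = [torch.linspace(-1, 1, 64) for _ in range(3)]
t = tn.cross(function=f, domain=domain)

print(t)           # compressed TT tensor
print(tn.mean(t))  # statistics work directly on the compressed representation
```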
If you use this package, please cite our paper:
```
@article{UBS:22,
  author  = {Mikhail Usvyatsov and Rafael Ballester-Ripoll and Konrad Schindler},
  title   = {tntorch: Tensor Network Learning with {PyTorch}},
  journal = {Journal of Machine Learning Research},
  year    = {2022},
  volume  = {23},
  number  = {208},
  pages   = {1--6},
  url     = {http://jmlr.org/papers/v23/21-1197.html}
}
```
Example Use Cases
Available tensor formats include the following (a short construction sketch follows the list):
- CANDECOMP/PARAFAC (CP)
- Tucker (implemented as a TT with increasing ranks, which has the same expressive power; the Tucker factors are unconstrained matrices, unlike the unitary/orthogonal factors used in some implementations)
- Tensor train (TT)
- Hybrids: CP-Tucker, TT-Tucker, etc.
- Partial support for other decompositions such as INDSCAL, CANDELINC, DEDICOM, PARATUCK2, and custom formats
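To make the list above concrete, here is a brief sketch of how tensors in several of these formats can be created. The `ranks_cp` and `ranks_tucker` keyword arguments follow the tensor-formats tutorial and should be treated as assumptions of this sketch; `ranks_tt` appears later in this README:

```python
import tntorch as tn

# ranks_cp / ranks_tucker kwargs assumed here, following the formats tutorial
t_cp = tn.randn(32, 32, 32, 32, ranks_cp=4)                         # CP, rank 4
t_tt = tn.randn(32, 32, 32, 32, ranks_tt=5)                         # TT, rank 5
t_tucker = tn.randn(32, 32, 32, 32, ranks_tucker=3)                 # Tucker, rank 3
t_tt_tucker = tn.randn(32, 32, 32, 32, ranks_tt=5, ranks_tucker=3)  # TT-Tucker hybrid

for t in (t_cp, t_tt, t_tucker, t_tt_tucker):
    print(t)  # each prints its own network diagram, but the interface is identical
```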
For example, the following networks both represent a 4D tensor (i.e., a real-valued function that can be evaluated at I1 x I2 x I3 x I4 possible positions) in the TT and TT-Tucker formats:
<p align="center"><img src="https://github.com/rballester/tntorch/blob/main/images/tensors.jpg" width="600" title="TT-Tucker"></p>

In tntorch, all tensor decompositions share the same interface. You can handle them transparently, as if they were plain NumPy arrays or PyTorch tensors:
```
> import tntorch as tn
> t = tn.randn(32, 32, 32, 32, ranks_tt=5)  # Random 4D TT tensor of shape 32 x 32 x 32 x 32 and TT-rank 5
> print(t)

4D TT tensor:

 32  32  32  32
  |   |   |   |
 (0) (1) (2) (3)
 / \ / \ / \ / \
1   5   5   5   1

> print(tn.mean(t))

tensor(8.0388)

> print(tn.norm(t))

tensor(9632.3726)
```
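Since compressed tensors behave like array objects, arithmetic between them also stays in compressed form. The following minimal sketch assumes the standard operator overloading and the `tn.dot` function from the arithmetic tutorial:

```python
import tntorch as tn

t1 = tn.randn(32, 32, 32, 32, ranks_tt=5)
t2 = tn.randn(32, 32, 32, 32, ranks_tt=5)

s = t1 + t2            # element-wise sum, still compressed
p = t1 * t2            # element-wise (Hadamard) product, still compressed
print(tn.dot(t1, t2))  # scalar dot product between the two tensors (assumed API)
print(tn.norm(s))      # norms, means, etc. apply as before
```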
Decompressing tensors is easy:
```
> print(t.torch().shape)
torch.Size([32, 32, 32, 32])
```
Thanks to PyTorch's automatic differentiation, you can easily define all sorts of loss functions on tensors:
```
import torch

def loss(t):
    return torch.norm(t[:, 0, 10:, [3, 4]].torch())  # NumPy-like "fancy indexing" for arrays
```
Most importantly, loss functions can be defined on compressed tensors as well:
```
def loss(t):
    return tn.norm(t[:3, :3, :3, :3] - t[-3:, -3:, -3:, -3:])
```
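Because these losses are differentiable, the TT cores themselves can be optimized with standard PyTorch machinery. The loop below is a minimal sketch, not tntorch's own fitting helper; it assumes direct access to the `t.cores` list of PyTorch tensors:

```python
import torch
import tntorch as tn

t = tn.randn(32, 32, 32, 32, ranks_tt=5)
for core in t.cores:      # direct access to the TT cores is an assumption here
    core.requires_grad_()

optimizer = torch.optim.Adam(t.cores, lr=1e-2)
for step in range(500):
    optimizer.zero_grad()
    loss = tn.norm(t[:3, :3, :3, :3] - t[-3:, -3:, -3:, -3:])  # loss from above
    loss.backward()
    optimizer.step()
```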
Check out the introductory notebook for all the details on the basics.
Tutorial Notebooks
- Introduction
- Active subspaces
- ANOVA decomposition
- Boolean logic
- Classification
- Cross-approximation
- Differentiable cross-approximation
- Differentiation
- Discrete/weighted finite automata
- Exponential machines
- Main tensor formats available
- Other custom formats
- Polynomial chaos expansions
- Tensor arithmetics
- Tensor completion and regression
- Tensor decomposition
- Sensitivity analysis
- Vector field data
Installation
You can install tntorch using pip:
```
pip install tntorch
```
Alternatively, you can install from source:

```
git clone https://github.com/rballester/tntorch.git
cd tntorch
pip install .
```
For functions that use cross-approximation, the optional package maxvolpy is required (it can be installed via `pip install maxvolpy`).
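To check that the installation works, a quick smoke test is to build and print a small random TT tensor; the snippet below only uses calls shown earlier in this README:

```python
import tntorch as tn

t = tn.randn(8, 8, 8, ranks_tt=3)  # small random 3D TT tensor
print(t)                           # prints the tensor network diagram
print(tn.norm(t))                  # and a scalar norm
```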
Testing
We use pytest. Simply run:
```
cd tests/
pytest
```
Contributing
Pull requests are welcome!
Besides using the issue tracker, also feel free to contact me at rafael.ballester@ie.edu.