geotensor

Motivation

The examples below illustrate the image recovery problem that geotensor addresses: a color image corrupted under four missing-data patterns, namely missing at random (MAR), row-wise MAR, column-wise MAR, and (row, column)-wise MAR.

<table> <tr> <td align="center"><a href="https://github.com/xinychen/geotensor/blob/master/data/lena_mar.jpg"><img src="https://github.com/xinychen/geotensor/blob/master/data/lena_mar.jpg?size=150" width="150px;" alt="Missing at random (MAR)"/><br /><sub><b>Missing at random (MAR)</b></sub></a><br /></td> <td align="center"><a href="https://github.com/xinychen/geotensor/blob/master/data/lena_rmar.jpg"><img src="https://github.com/xinychen/geotensor/blob/master/data/lena_rmar.jpg?size=150" width="150px;" alt="Row-wise MAR"/><br /><sub><b>Row-wise MAR</b></sub></a><br /></td> <td align="center"><a href="https://github.com/xinychen/geotensor/blob/master/data/lena_cmar.jpg"><img src="https://github.com/xinychen/geotensor/blob/master/data/lena_cmar.jpg?size=150" width="150px;" alt="Column-wise MAR"/><br /><sub><b>Column-wise MAR</b></sub></a><br /></td> <td align="center"><a href="https://github.com/xinychen/geotensor/blob/master/data/lena_rcmar.jpg"><img src="https://github.com/xinychen/geotensor/blob/master/data/lena_rcmar.jpg?size=150" width="150px;" alt="(Row, column)-wise MAR"/><br /><sub><b>(Row, column)-wise MAR</b></sub></a><br /></td> </tr> </table>

Implementation

Notably, unlike the complex equations in our models, our Python implementation (which relies only on NumPy) is easy to work with. Take GLTC-Geman as an example: its kernel takes only a few dozen lines:

```python
import numpy as np
from numpy.linalg import inv

# `ten2mat` and `mat2ten` are the mode-k tensor unfolding / folding helpers
# defined elsewhere in this repository.

def supergradient(s_hat, lambda0, theta):
    """Supergradient of the Geman function."""
    return (lambda0 * theta / (s_hat + theta) ** 2)

def GLTC_Geman(dense_tensor, sparse_tensor, alpha, beta, rho, theta, maxiter):
    """Main function of the GLTC-Geman."""
    dim0 = sparse_tensor.ndim
    dim1, dim2, dim3 = sparse_tensor.shape
    dim = np.array([dim1, dim2, dim3])
    binary_tensor = np.zeros((dim1, dim2, dim3))
    binary_tensor[np.where(sparse_tensor != 0)] = 1
    tensor_hat = sparse_tensor.copy()
    
    X = np.zeros((dim1, dim2, dim3, dim0)) # \boldsymbol{\mathcal{X}} (n1*n2*3*d)
    Z = np.zeros((dim1, dim2, dim3, dim0)) # \boldsymbol{\mathcal{Z}} (n1*n2*3*d)
    T = np.zeros((dim1, dim2, dim3, dim0)) # \boldsymbol{\mathcal{T}} (n1*n2*3*d)
    for k in range(dim0):
        X[:, :, :, k] = tensor_hat
        Z[:, :, :, k] = tensor_hat
    
    D1 = np.zeros((dim1 - 1, dim1)) # (n1-1)-by-n1 adjacent smoothness matrix
    for i in range(dim1 - 1):
        D1[i, i] = -1
        D1[i, i + 1] = 1
    D2 = np.zeros((dim2 - 1, dim2)) # (n2-1)-by-n2 adjacent smoothness matrix
    for i in range(dim2 - 1):
        D2[i, i] = -1
        D2[i, i + 1] = 1
        
    # Initialize the weights w[k] as supergradients of the singular values of each unfolding.
    w = []
    for k in range(dim0):
        u, s, v = np.linalg.svd(ten2mat(Z[:, :, :, k], k), full_matrices = 0)
        w.append(np.zeros(len(s)))
        for i in range(len(np.where(s > 0)[0])):
            w[k][i] = supergradient(s[i], alpha, theta)

    # Main ADMM-style loop: weighted singular value thresholding (Z-update),
    # smoothness-regularized X-update, and dual variable (T) update.
    for iters in range(maxiter):
        for k in range(dim0):
            u, s, v = np.linalg.svd(ten2mat(X[:, :, :, k] + T[:, :, :, k] / rho, k), full_matrices = 0)
            for i in range(len(np.where(w[k] > 0)[0])):
                s[i] = max(s[i] - w[k][i] / rho, 0)
            Z[:, :, :, k] = mat2ten(np.matmul(np.matmul(u, np.diag(s)), v), dim, k)
            var = ten2mat(rho * Z[:, :, :, k] - T[:, :, :, k], k)
            if k == 0:
                var0 = mat2ten(np.matmul(inv(beta * np.matmul(D1.T, D1) + rho * np.eye(dim1)), var), dim, k)
            elif k == 1:
                var0 = mat2ten(np.matmul(inv(beta * np.matmul(D2.T, D2) + rho * np.eye(dim2)), var), dim, k)
            else:
                var0 = Z[:, :, :, k] - T[:, :, :, k] / rho
            X[:, :, :, k] = np.multiply(1 - binary_tensor, var0) + sparse_tensor
            
            uz, sz, vz = np.linalg.svd(ten2mat(Z[:, :, :, k], k), full_matrices = 0)
            for i in range(len(np.where(sz > 0)[0])):
                w[k][i] = supergradient(sz[i], alpha, theta)
        tensor_hat = np.mean(X, axis = 3)
        for k in range(dim0):
            T[:, :, :, k] = T[:, :, :, k] + rho * (X[:, :, :, k] - Z[:, :, :, k])
            X[:, :, :, k] = tensor_hat.copy()

    return tensor_hat
```
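For context, the `supergradient` routine evaluates the derivative of the Geman penalty applied to each singular value. A brief sketch, assuming the standard form of the Geman function with lambda0, theta > 0:

```latex
% Geman penalty on a singular value s and its supergradient (assumed standard form)
\phi(s) = \frac{\lambda_0 \, s}{s + \theta},
\qquad
\partial \phi(s) = \frac{\lambda_0 \, \theta}{(s + \theta)^2}.
```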

Have fun working with our code! As a quick start, the sketch below shows one possible way to call `GLTC_Geman`; the table that follows shows the corrupted inputs and the recovered images, together with the corresponding RSE.
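The following is a minimal, hypothetical usage sketch, not code from the repository: the `ten2mat` / `mat2ten` helpers below assume the conventional Fortran-order mode-k unfolding expected by `GLTC_Geman` above, the input tensor is synthetic, and the hyper-parameter values (`alpha`, `beta`, `rho`, `theta`, `maxiter`) are illustrative assumptions only.

```python
import numpy as np

def ten2mat(tensor, mode):
    """Unfold a tensor into a matrix along the given mode (assumed Fortran-order convention)."""
    return np.reshape(np.moveaxis(tensor, mode, 0), (tensor.shape[mode], -1), order='F')

def mat2ten(mat, dim, mode):
    """Fold a matrix back into a tensor of shape `dim` along the given mode."""
    index = [mode] + [i for i in range(len(dim)) if i != mode]
    return np.moveaxis(np.reshape(mat, list(dim[index]), order='F'), 0, mode)

np.random.seed(0)
dense_tensor = np.random.rand(50, 50, 3)     # synthetic ground-truth color image (n1 x n2 x 3)
mask = np.random.rand(50, 50, 3) < 0.7       # keep roughly 70% of the entries
sparse_tensor = dense_tensor * mask          # observed tensor; missing entries are set to zero

# Call the GLTC_Geman function defined above; these parameter values are guesses, not tuned.
tensor_hat = GLTC_Geman(dense_tensor, sparse_tensor,
                        alpha=1e1, beta=1e-1, rho=1e-2, theta=10, maxiter=100)

# Relative error of the recovered tensor against the ground truth.
rse = np.linalg.norm(tensor_hat - dense_tensor) / np.linalg.norm(dense_tensor)
print('RSE: {:.2%}'.format(rse))
```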

<table> <tr> <td align="center"><a href="https://github.com/xinychen/geotensor/blob/master/data/lena_mar.jpg"><img src="https://github.com/xinychen/geotensor/blob/master/data/lena_mar.jpg?size=150" width="150px;" alt="Missing at random (MAR)"/><br /><sub><b>Missing at random (MAR)</b></sub></a><br /></td> <td align="center"><a href="https://github.com/xinychen/geotensor/blob/master/data/lena_rmar.jpg"><img src="https://github.com/xinychen/geotensor/blob/master/data/lena_rmar.jpg?size=150" width="150px;" alt="Row-wise MAR"/><br /><sub><b>Row-wise MAR</b></sub></a><br /></td> <td align="center"><a href="https://github.com/xinychen/geotensor/blob/master/data/lena_cmar.jpg"><img src="https://github.com/xinychen/geotensor/blob/master/data/lena_cmar.jpg?size=150" width="150px;" alt="Column-wise MAR"/><br /><sub><b>Column-wise MAR</b></sub></a><br /></td> <td align="center"><a href="https://github.com/xinychen/geotensor/blob/master/data/lena_rcmar.jpg"><img src="https://github.com/xinychen/geotensor/blob/master/data/lena_rcmar.jpg?size=150" width="150px;" alt="(Row, column)-wise MAR"/><br /><sub><b>(Row, column)-wise MAR</b></sub></a><br /></td> </tr> <tr> <td align="center"><a href="https://github.com/xinychen/geotensor/blob/master/data/GLTC_Geman_lena_mar.jpg"><img src="https://github.com/xinychen/geotensor/blob/master/data/GLTC_Geman_lena_mar.jpg?size=150" width="150px;" alt="RSE = 6.74%"/><br /><sub><b>RSE = 6.74%</b></sub></a><br /></td> <td align="center"><a href="https://github.com/xinychen/geotensor/blob/master/data/GLTC_Geman_lena_rmar.jpg"><img src="https://github.com/xinychen/geotensor/blob/master/data/GLTC_Geman_lena_rmar.jpg?size=150" width="150px;" alt="RSE = 8.20%"/><br /><sub><b>RSE = 8.20%</b></sub></a><br /></td> <td align="center"><a href="https://github.com/xinychen/geotensor/blob/master/data/GLTC_Geman_lena_cmar.jpg"><img src="https://github.com/xinychen/geotensor/blob/master/data/GLTC_Geman_lena_cmar.jpg?size=150" width="150px;" alt="RSE = 10.80%"/><br /><sub><b>RSE = 10.80%</b></sub></a><br /></td> <td align="center"><a href="https://github.com/xinychen/geotensor/blob/master/data/GLTC_Geman_lena_rcmar.jpg"><img src="https://github.com/xinychen/geotensor/blob/master/data/GLTC_Geman_lena_rcmar.jpg?size=150" width="150px;" alt="RSE = 8.38%"/><br /><sub><b>RSE = 8.38%</b></sub></a><br /></td> </tr> </table>

Reference

| No. | Title | Year | PDF | Code |
|---|---|---|---|---|
| 1 | Tensor Completion for Estimating Missing Values in Visual Data | 2013 | TPAMI | - |
| 2 | Efficient tensor completion for color image and video recovery: Low-rank tensor train | 2016 | arxiv | - |
| 3 | Tensor Robust Principal Component Analysis: Exact Recovery of Corrupted Low-Rank Tensors via Convex Optimization | 2016 | CVPR | Matlab |
| 4 | Geometric Matrix Completion with Recurrent Multi-Graph Neural Networks | 2017 | NeurIPS | Python |
| 5 | Efficient Low Rank Tensor Ring Completion | 2017 | ICCV | Matlab |
| 6 | Spatio-Temporal Signal Recovery Based on Low Rank and Differential Smoothness | 2018 | IEEE | - |
| 7 | Exact Low Tubal Rank Tensor Recovery from Gaussian Measurements | 2018 | IJCAI | Matlab |
| 8 | Tensor Robust Principal Component Analysis with A New Tensor Nuclear Norm | 2018 | TPAMI | Matlab |

| No. | Title | Year | PDF | Code |
|---|---|---|---|---|
| 1 | A General Iterative Shrinkage and Thresholding Algorithm for Non-convex Regularized Optimization Problems | 2013 | ICML | - |
| 2 | Fast Randomized Singular Value Thresholding for Nuclear Norm Minimization | 2015 | CVPR | - |
| 3 | A Fast Implementation of Singular Value Thresholding Algorithm using Recycling Rank Revealing Randomized Singular Value Decomposition | 2017 | arxiv | - |
| 4 | Fast Randomized Singular Value Thresholding for Low-rank Optimization | 2018 | TPAMI | - |
| 5 | Fast Parallel Randomized QR with Column Pivoting Algorithms for Reliable Low-rank Matrix Approximations | 2018 | arxiv | - |
| 6 | Low-Rank Matrix Approximations with Flip-Flop Spectrum-Revealing QR Factorization | 2018 | arxiv | - |

| No. | Title | Year | PDF | Code |
|---|---|---|---|---|
| 1 | Accelerated Proximal Gradient Methods for Nonconvex Programming | 2015 | NIPS | Supp |
| 2 | Incorporating Nesterov Momentum into Adam | 2016 | ICLR | - |

| No. | Title | Year | PDF | Code |
|---|---|---|---|---|
| 1 | Differentiable Linearized ADMM | 2019 | ICML | - |
| 2 | Faster Stochastic Alternating Direction Method of Multipliers for Nonconvex Optimization | 2019 | ICML | - |

| No. | Title | Year | PDF | Code |
|---|---|---|---|---|
| 1 | Math Lecture 671: Tensor Train decomposition methods | 2016 | slide | - |
| 2 | Introduction to the Tensor Train Decomposition and Its Applications in Machine Learning | 2016 | slide | - |

| No. | Title | Year | PDF | Code |
|---|---|---|---|---|
| 1 | Fast and Accurate Matrix Completion via Truncated Nuclear Norm Regularization | 2013 | TPAMI | - |
| 2 | Generalized nonconvex nonsmooth low-rank minimization | 2014 | CVPR | Matlab |
| 3 | Generalized Singular Value Thresholding | 2015 | AAAI | - |
| 4 | Partial Sum Minimization of Singular Values in Robust PCA: Algorithm and Applications | 2016 | TPAMI | - |
| 5 | Efficient Inexact Proximal Gradient Algorithm for Nonconvex Problems | 2016 | arxiv | - |
| 6 | Scalable Tensor Completion with Nonconvex Regularization | 2018 | arxiv | - |
| 7 | Large-Scale Low-Rank Matrix Learning with Nonconvex Regularizers | 2018 | TPAMI | - |
| 8 | Nonconvex Robust Low-rank Matrix Recovery | 2018 | arxiv | Matlab |
| 9 | Matrix Completion via Nonconvex Regularization: Convergence of the Proximal Gradient Algorithm | 2019 | arxiv | Matlab |
| 10 | Efficient Nonconvex Regularized Tensor Completion with Structure-aware Proximal Iterations | 2019 | ICML | Matlab |
| 11 | Guaranteed Matrix Completion under Multiple Linear Transformations | 2019 | CVPR | - |

| No. | Title | Year | PDF | Code |
|---|---|---|---|---|
| 1 | A General Iterative Shrinkage and Thresholding Algorithm for Non-convex Regularized Optimization Problems | 2013 | ICML | - |
| 2 | Rank Minimization with Structured Data Patterns | 2014 | ECCV | - |
| 3 | Minimizing the Maximal Rank | 2016 | CVPR | - |
| 4 | Convex Low Rank Approximation | 2016 | IJCV | - |
| 5 | Non-Convex Rank/Sparsity Regularization and Local Minima | 2017 | ICCV, Supp | - |
| 6 | A Non-Convex Relaxation for Fixed-Rank Approximation | 2017 | ICCV | - |
| 7 | Inexact Proximal Gradient Methods for Non-Convex and Non-Smooth Optimization | 2018 | AAAI | - |
| 8 | Non-Convex Relaxations for Rank Regularization | 2019 | slide | - |
| 9 | Geometry and Regularization in Nonconvex Low-Rank Estimation | 2019 | slide | - |
| 10 | Large-Scale Low-Rank Matrix Learning with Nonconvex Regularizers | 2018 | IEEE TPAMI | - |

| No. | Title | Year | PDF | Code |
|---|---|---|---|---|
| 1 | Weighted Nuclear Norm Minimization with Application to Image Denoising | 2014 | CVPR | Matlab |
| 2 | A Nonconvex Relaxation Approach for Rank Minimization Problems | 2015 | AAAI | - |
| 3 | Multi-Scale Weighted Nuclear Norm Image Restoration | 2018 | CVPR | Matlab |
| 4 | On the Optimal Solution of Weighted Nuclear Norm Minimization | - | PDF | - |

Collaborators

<table> <tr> <td align="center"><a href="https://github.com/xinychen"><img src="https://github.com/xinychen.png?size=80" width="80px;" alt="Xinyu Chen"/><br /><sub><b>Xinyu Chen</b></sub></a><br /><a href="https://github.com/xinychen/geotensor/commits?author=xinychen" title="Code">💻</a></td> <td align="center"><a href="https://github.com/Vadermit"><img src="https://github.com/Vadermit.png?size=80" width="80px;" alt="Jinming Yang"/><br /><sub><b>Jinming Yang</b></sub></a><br /><a href="https://github.com/xinychen/geotensor/commits?author=Vadermit" title="Code">💻</a></td> <td align="center"><a href="https://github.com/lijunsun"><img src="https://github.com/lijunsun.png?size=80" width="80px;" alt="Lijun Sun"/><br /><sub><b>Lijun Sun</b></sub></a><br /><a href="https://github.com/xinychen/geotensor/commits?author=lijunsun" title="Code">💻</a></td> </tr> </table>

See the list of contributors who participated in this project.

License

This work is released under the MIT license.