Python code used in NIPS 2017 paper: Fitting Low-Rank Tensors in Constant Time, by K. Hayashi and Y. Yoshida


This repository contains Python code used in the NIPS 2017 paper Fitting Low-Rank Tensors in Constant Time by K. Hayashi and Y. Yoshida.

TL;DR: estimate the low-rank approximation error of a tensor very quickly (in less than one second).



The function approx_residual(X, ranks, k) computes the residual (approximation) error of a low-rank tensor, where X is the tensor to be decomposed (numpy array), ranks is the Tucker rank (tuple), and k is the number of samples (int). Note that the length of ranks must equal the order of X (i.e., the length of X.shape).
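As a quick illustration of what "residual error" means (this sketch is not part of the repository), consider the order-2 case: for a matrix, the Tucker rank reduces to the matrix rank, and by the Eckart–Young theorem the best rank-r relative residual comes from discarding all but the top r singular values.

```python
import numpy as np

def matrix_residual(X, r):
    """Relative residual ||X - X_r||_F / ||X||_F of the best rank-r
    approximation, computed from the discarded singular values."""
    s = np.linalg.svd(X, compute_uv=False)
    return np.sqrt(np.sum(s[r:] ** 2)) / np.sqrt(np.sum(s ** 2))

rng = np.random.default_rng(0)
# An exactly rank-5 matrix: the residual at rank 5 is (numerically) zero,
# while truncating below the true rank leaves a visible residual.
X = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 100))
print(matrix_residual(X, 5))  # ~0
print(matrix_residual(X, 3))  # clearly positive
```

approx_residual estimates the analogous quantity for higher-order tensors without running a full decomposition.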


You can simply run the script with python; it compares the exact and approximate residual errors on a toy low-rank tensor, as follows.

import numpy as np
import sktensor as st  # assumes the sktensor (scikit-tensor) package

if __name__ == "__main__":

    def gen_lowrank_X(ns, ranks, sigma=0):
        # Random Tucker core of shape `ranks`
        C = np.random.randn(*ranks)
        Us = list()
        for i in range(len(ns)):
            (n, r) = (ns[i], ranks[i])
            Us.append(np.random.randn(n * r).reshape((n, r)))

        # Multiply the core by each factor matrix along its mode
        A = st.ttm(st.dtensor(C), Us)
        A /= np.sqrt(np.mean(A ** 2))
        # Add i.i.d. Gaussian noise of scale sigma
        A += np.random.randn(*A.shape) * sigma
        return A
    n, rank, k = 200, 5, 10
    order = 3

    ranks = [rank] * order
    ns = [n] * order
    X = gen_lowrank_X(ns, ranks, 0.01)
    print('hooi:    \t', residual(X, ranks))
    print('sampling:\t', approx_residual(X, ranks, k))
    print('randomized:\t', residual(X, ranks, method='randomized'))

First, gen_lowrank_X generates a 200 x 200 x 200 tensor of rank (5, 5, 5) with small Gaussian noise. Then the residual errors computed by HOOI, the proposed sampling method, and randomized SVD are printed.
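The core idea can be sketched in plain numpy (this is a simplified illustration, not the repository's implementation and not the paper's exact estimator with its guarantees): sample k indices uniformly in each mode, and compute the residual of the induced k x k x k subtensor as a stand-in for the residual of the full tensor. The helper below uses a truncated HOSVD, which gives an upper bound on the optimal Tucker residual.

```python
import numpy as np

def unfold(T, mode):
    """Mode-`mode` unfolding of a tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_residual(T, ranks):
    """Relative residual of a truncated HOSVD approximation of T
    (an upper bound on the optimal Tucker residual)."""
    Us = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        Us.append(U[:, :r])
    # Project onto the factor subspaces to get the core...
    G = T.copy()
    for mode, U in enumerate(Us):
        G = np.moveaxis(np.tensordot(U.T, np.moveaxis(G, mode, 0), axes=1), 0, mode)
    # ...then expand back to reconstruct the approximation.
    A = G
    for mode, U in enumerate(Us):
        A = np.moveaxis(np.tensordot(U, np.moveaxis(A, mode, 0), axes=1), 0, mode)
    return np.linalg.norm(T - A) / np.linalg.norm(T)

rng = np.random.default_rng(0)
# A small noisy rank-(5, 5, 5) tensor, mirroring the demo above
n, r, k = 60, 5, 20
G = rng.standard_normal((r, r, r))
Us = [rng.standard_normal((n, r)) for _ in range(3)]
X = np.einsum('abc,ia,jb,kc->ijk', G, *Us)
X /= np.sqrt(np.mean(X ** 2))
X += 0.01 * rng.standard_normal(X.shape)

full = hosvd_residual(X, (r, r, r))
# Sample k indices per mode and measure the residual of the subtensor only
idx = [rng.choice(n, k, replace=False) for _ in range(3)]
approx = hosvd_residual(X[np.ix_(*idx)], (r, r, r))
print(full, approx)
```

The subtensor residual is computed on k^3 entries instead of n^3, which is what makes constant-time estimation possible; the paper analyzes when and why this sampled estimate is close to the true residual.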