Fix paper citations and typography errors
dsuess committed Dec 14, 2017
1 parent 5a5b409 commit c9c0d59
Showing 2 changed files with 132 additions and 150 deletions.
11 changes: 6 additions & 5 deletions paper/paper.md
@@ -1,5 +1,5 @@
---
-title: 'mpnum: A matrix product representation library for python'
+title: 'mpnum: A matrix product representation library for Python'
tags:
- matrix-product
- tensor-train
@@ -25,11 +25,12 @@ bibliography: references.bib
Tensors -- or high-dimensional arrays -- are ubiquitous in science and provide the foundation for numerous numerical algorithms in scientific computing, machine learning, signal processing, and other fields.
With their high demands in memory and computational time, tensor computations constitute the bottleneck of many such algorithms.
This has led to the development of sparse and low-rank tensor decompositions [@Decompositions].
-One such decomposition, which was first developed under the name _"matrix product state"_ (MPS) in the study of entanglement in quantum physics[@Werner], is the _matrix product_ or _tensor train_ (TT) representation [@Schollwoeck,@Osedelets].
+One such decomposition, which was first developed under the name _"matrix product state"_ (MPS) in the study of entanglement in quantum physics [@Werner], is the _matrix product_ or _tensor train_ (TT) representation [@Schollwoeck; @Oseledets].

-The matrix product tensor format is often used in practice (see e.g. [@Latorre,@NMR,@QuantumChemistry,@Uncertainty,@NeuralNetworks,@Stoudenmire]) for two reasons:
-On the one hand, it captures the low-dimensional structure of many problems well. Therefore, it can be used model those problems computationally in an efficient way.
-On the other hand, the matrix product tensor format also allows for performing crucial tensor operations -- such as addition, contraction, or low-rank approximation -- efficiently [@Schollwoeck,@Osedelets,@Orus,@Dance].
+The matrix product tensor format is often used in practice [@Latorre; @NMR; @QuantumChemistry; @Uncertainty; @NeuralNetworks; @Stoudenmire] for two reasons:
+On the one hand, it captures the low-dimensional structure of many problems well.
+Therefore, it can be used to model those problems computationally in an efficient way.
+On the other hand, the matrix product tensor format also allows for performing crucial tensor operations -- such as addition, contraction, or low-rank approximation -- efficiently [@Schollwoeck; @Oseledets; @Orus; @Dance].
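As an illustration for the reader (not part of the paper or of mpnum), the low-rank approximation operation mentioned above can be sketched in plain NumPy: truncating the SVD gives the best rank-r approximation in the Frobenius norm (Eckart-Young), and TT compression applies the same idea core by core. The matrix sizes and noise level below are arbitrary choices for the demo.

```python
import numpy as np

def truncated_svd(a, r):
    """Best rank-r approximation of a matrix in the Frobenius norm
    (Eckart-Young theorem); TT "rounding" uses this locally per core."""
    u, s, vt = np.linalg.svd(a, full_matrices=False)
    return (u[:, :r] * s[:r]) @ vt[:r]

rng = np.random.default_rng(1)
# An exactly rank-3 matrix, plus tiny noise that the truncation removes.
low = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
noisy = low + 1e-8 * rng.standard_normal((50, 40))
approx = truncated_svd(noisy, 3)
err = np.linalg.norm(approx - low)  # tiny: truncation discards the noise
```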

The library **mpnum** [@mpnum] provides a flexible, user-friendly, and expandable toolbox for prototyping algorithms based on the matrix-product tensor format.
Its fundamental data structure is the `MPArray` which represents a tensor with an arbitrary number of dimensions and local structure.
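To sketch what such a matrix product representation looks like, the following standalone NumPy code implements the standard TT-SVD scheme [@Oseledets]: sequential SVDs split a full tensor into a chain of three-legged cores. This is an illustrative sketch only; it does not use mpnum's `MPArray` API, and the function names are hypothetical.

```python
import numpy as np

def tt_decompose(tensor):
    """Split a tensor into tensor-train / matrix product cores by
    sequential SVDs (TT-SVD; no truncation, so the result is exact)."""
    dims = tensor.shape
    cores = []
    rank = 1
    mat = np.asarray(tensor)
    for d in dims[:-1]:
        mat = mat.reshape(rank * d, -1)       # split off one physical index
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        cores.append(u.reshape(rank, d, -1))  # legs: (rank_in, d, rank_out)
        rank = len(s)
        mat = s[:, None] * vt                 # carry the remainder rightwards
    cores.append(mat.reshape(rank, dims[-1], 1))
    return cores

def tt_contract(cores):
    """Contract the cores back into the full tensor (for verification)."""
    res = cores[0]
    for core in cores[1:]:
        res = np.tensordot(res, core, axes=([-1], [0]))
    return res.reshape([c.shape[1] for c in cores])
```

Without truncation the decomposition is exact, so contracting the cores reproduces the original tensor up to floating-point error.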
271 changes: 126 additions & 145 deletions paper/references.bib
@@ -1,183 +1,164 @@
@article{Decompositions,
-title = {Tensor Decompositions and Applications},
-volume = {51},
-issn = {0036-1445},
-url = {http://epubs.siam.org/doi/abs/10.1137/07070111X},
-doi = {10.1137/07070111X},
-abstract = {This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or N-way array. Decompositions of higher-order tensors (i.e., N-way arrays with \$N {\textbackslash}geq 3\$) have applications in psycho-metrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: {CANDECOMP}/{PARAFAC} ({CP}) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including {INDSCAL}, {PARAFAC}2, {CANDELINC}, {DEDICOM}, and {PARATUCK}2 as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors.},
-pages = {455--500},
-number = {3},
-journaltitle = {{SIAM} Review},
-shortjournal = {{SIAM} Rev.},
-author = {Kolda, T. and Bader, B.},
-urldate = {2017-08-21},
-date = {2009-08-05},
+title = {Tensor Decompositions and Applications},
+volume = {51},
+issn = {0036-1445},
+url = {http://epubs.siam.org/doi/abs/10.1137/07070111X},
+doi = {10.1137/07070111X},
+pages = {455--500},
+number = {3},
+journaltitle = {{SIAM} Review},
+shortjournal = {{SIAM} Rev.},
+author = {Kolda, T. and Bader, B.},
+year = {2009}
}

@article{Schollwoeck,
-title = {The density-matrix renormalization group in the age of matrix product states},
-volume = {326},
-issn = {0003-4916},
-url = {http://www.sciencedirect.com/science/article/pii/S0003491610001752},
-doi = {10.1016/j.aop.2010.09.012},
-series = {January 2011 Special Issue},
-abstract = {The density-matrix renormalization group method ({DMRG}) has established itself over the last decade as the leading method for the simulation of the statics and dynamics of one-dimensional strongly correlated quantum lattice systems. In the further development of the method, the realization that {DMRG} operates on a highly interesting class of quantum states, so-called matrix product states ({MPS}), has allowed a much deeper understanding of the inner structure of the {DMRG} method, its further potential and its limitations. In this paper, I want to give a detailed exposition of current {DMRG} thinking in the {MPS} language in order to make the advisable implementation of the family of {DMRG} algorithms in exclusively {MPS} terms transparent. I then move on to discuss some directions of potentially fruitful further algorithmic development: while {DMRG} is a very mature method by now, I still see potential for further improvements, as exemplified by a number of recently introduced algorithms.},
-pages = {96--192},
-number = {1},
-journaltitle = {Annals of Physics},
-shortjournal = {Annals of Physics},
-author = {Schollwöck, Ulrich},
-urldate = {2014-10-20},
-date = {2011-01},
+title = {The density-matrix renormalization group in the age of matrix product states},
+volume = {326},
+issn = {0003-4916},
+url = {http://www.sciencedirect.com/science/article/pii/S0003491610001752},
+doi = {10.1016/j.aop.2010.09.012},
+series = {January 2011 Special Issue},
+pages = {96--192},
+number = {1},
+journaltitle = {Annals of Physics},
+shortjournal = {Annals of Physics},
+author = {Schollwöck, U.},
+year = {2011},
}

@article{Oseledets,
-title = {Tensor-Train Decomposition},
-volume = {33},
-issn = {1064-8275},
-url = {http://epubs.siam.org/doi/abs/10.1137/090752286},
-doi = {10.1137/090752286},
-abstract = {A simple nonrecursive form of the tensor decomposition in d dimensions is presented. It does not inherently suffer from the curse of dimensionality, it has asymptotically the same number of parameters as the canonical decomposition, but it is stable and its computation is based on low-rank approximation of auxiliary unfolding matrices. The new form gives a clear and convenient way to implement all basic operations efficiently. A fast rounding procedure is presented, as well as basic linear algebra operations. Examples showing the benefits of the decomposition are given, and the efficiency is demonstrated by the computation of the smallest eigenvalue of a 19-dimensional operator.},
-pages = {2295--2317},
-number = {5},
-journaltitle = {{SIAM} Journal on Scientific Computing},
-shortjournal = {{SIAM} J. Sci. Comput.},
-author = {Oseledets, I.},
-urldate = {2017-08-21},
-date = {2011-01-01},
+title = {Tensor-Train Decomposition},
+volume = {33},
+issn = {1064-8275},
+url = {http://epubs.siam.org/doi/abs/10.1137/090752286},
+doi = {10.1137/090752286},
+pages = {2295--2317},
+number = {5},
+journaltitle = {{SIAM} Journal on Scientific Computing},
+shortjournal = {{SIAM} J. Sci. Comput.},
+author = {Oseledets, I.},
+year = {2011}
}

@article{Dance,
-title = {Hand-waving and interpretive dance: an introductory course on tensor networks},
-volume = {50},
-issn = {1751-8121},
-url = {http://stacks.iop.org/1751-8121/50/i=22/a=223001},
-doi = {10.1088/1751-8121/aa6dc3},
-shorttitle = {Hand-waving and interpretive dance},
-abstract = {The curse of dimensionality associated with the Hilbert space of spin systems provides a significant obstruction to the study of condensed matter systems. Tensor networks have proven an important tool in attempting to overcome this difficulty in both the numerical and analytic regimes. These notes form the basis for a seven lecture course, introducing the basics of a range of common tensor networks and algorithms. In particular, we cover: introductory tensor network notation, applications to quantum information, basic properties of matrix product states, a classification of quantum phases using tensor networks, algorithms for finding matrix product states, basic properties of projected entangled pair states, and multiscale entanglement renormalisation ansatz states. The lectures are intended to be generally accessible, although the relevance of many of the examples may be lost on students without a background in many-body physics/quantum information. For each lecture, several problems are given, with worked solutions in an ancillary file.},
-pages = {223001},
-number = {22},
-journaltitle = {Journal of Physics A: Mathematical and Theoretical},
-shortjournal = {J. Phys. A: Math. Theor.},
-author = {Bridgeman, Jacob C. and Chubb, Christopher T.},
-urldate = {2017-08-21},
-date = {2017},
-langid = {english},
-keywords = {Condensed Matter - Statistical Mechanics, Condensed Matter - Strongly Correlated Electrons, High Energy Physics - Theory, Quantum Physics},
+title = {Hand-waving and interpretive dance: an introductory course on tensor networks},
+volume = {50},
+issn = {1751-8121},
+url = {http://stacks.iop.org/1751-8121/50/i=22/a=223001},
+doi = {10.1088/1751-8121/aa6dc3},
+shorttitle = {Hand-waving and interpretive dance},
+pages = {223001},
+number = {22},
+journaltitle = {Journal of Physics A: Mathematical and Theoretical},
+shortjournal = {J. Phys. A: Math. Theor.},
+author = {Bridgeman, J. C. and Chubb, C. T.},
+year = {2017},
}

@article{Orus,
-title = {A practical introduction to tensor networks: Matrix product states and projected entangled pair states},
-volume = {349},
-issn = {0003-4916},
-url = {http://www.sciencedirect.com/science/article/pii/S0003491614001596},
-doi = {10.1016/j.aop.2014.06.013},
-shorttitle = {A practical introduction to tensor networks},
-abstract = {This is a partly non-technical introduction to selected topics on tensor network methods, based on several lectures and introductory seminars given on the subject. It should be a good place for newcomers to get familiarized with some of the key ideas in the field, specially regarding the numerics. After a very general introduction we motivate the concept of tensor network and provide several examples. We then move on to explain some basics about Matrix Product States ({MPS}) and Projected Entangled Pair States ({PEPS}). Selected details on some of the associated numerical methods for 1 d and 2 d quantum lattice systems are also discussed.},
-pages = {117--158},
-journaltitle = {Annals of Physics},
-shortjournal = {Annals of Physics},
-author = {Orús, Román},
-urldate = {2014-10-06},
-date = {2014-10},
-keywords = {Condensed Matter - Strongly Correlated Electrons, Entanglement, High Energy Physics - Lattice, High Energy Physics - Theory, {MPS}, {PEPS}, Quantum Physics, Tensor networks},
+title = {A practical introduction to tensor networks: Matrix product states and projected entangled pair states},
+volume = {349},
+issn = {0003-4916},
+url = {http://www.sciencedirect.com/science/article/pii/S0003491614001596},
+doi = {10.1016/j.aop.2014.06.013},
+shorttitle = {A practical introduction to tensor networks},
+pages = {117--158},
+journaltitle = {Annals of Physics},
+shortjournal = {Annals of Physics},
+author = {Orús, R.},
+year = {2014}
}

@article{Werner,
-title = {Finitely correlated states on quantum spin chains},
-volume = {144},
-issn = {0010-3616, 1432-0916},
-url = {http://link.springer.com/article/10.1007/BF02099178},
-doi = {10.1007/BF02099178},
-abstract = {We study a construction that yields a class of translation invariant states on quantum spin chains, characterized by the property that the correlations across any bond can be modeled on a finite-dimensional vector space. These states can be considered as generalized valence bond states, and they are dense in the set of all translation invariant states. We develop a complete theory of the ergodic decomposition of such states, including the decomposition into periodic “Néel ordered” states. The ergodic components have exponential decay of correlations. All states considered can be obtained as “local functions” of states of a special kind, so-called “purely generated states,” which are shown to be ground states for suitably chosen finite range {VBS} interactions. We show that all these generalized {VBS} models have a spectral gap. Our theory does not require symmetry of the state with respect to a local gauge group. In particular we illustrate our results with a one-parameter family of examples which are not isotropic except for one special case. This isotropic model coincides with the one-dimensional antiferromagnet, recently studied by Affleck, Kennedy, Lieb, and Tasaki.},
-pages = {443--490},
-number = {3},
-journaltitle = {Communications in Mathematical Physics},
-shortjournal = {Commun.Math. Phys.},
-author = {Fannes, M. and Nachtergaele, B. and Werner, R. F.},
-urldate = {2014-08-18},
-date = {1992-03-01},
-langid = {english},
-keywords = {Mathematical and Computational Physics, Nonlinear Dynamics, Complex Systems, Chaos, Neural Networks, Quantum Computing, Information and Physics, Quantum Physics, Relativity and Cosmology, Statistical Physics},
+title = {Finitely correlated states on quantum spin chains},
+volume = {144},
+issn = {0010-3616, 1432-0916},
+url = {http://link.springer.com/article/10.1007/BF02099178},
+doi = {10.1007/BF02099178},
+pages = {443--490},
+number = {3},
+journaltitle = {Communications in Mathematical Physics},
+shortjournal = {Commun. Math. Phys.},
+author = {Fannes, M. and Nachtergaele, B. and Werner, R. F.},
+year = {1992},
}

@software{mpnum,
-title = {mpnum: Matrix Product Representation library for Python},
-rights = {{BSD}-3-Clause},
-url = {https://github.com/dseuss/mpnum},
-shorttitle = {mpnum},
-author = {Suess, Daniel and Holzaepfel, Milan},
-urldate = {2017-08-10},
-date = {2017-08-10},
-note = {original-date: 2016-03-09T15:44:58Z},
-keywords = {dmrg, matrix-product, tensor-train},
+title = {mpnum: Matrix Product Representation library for Python},
+rights = {{BSD}-3-Clause},
+url = {https://github.com/dseuss/mpnum},
+shorttitle = {mpnum},
+author = {Suess, D. and Holzaepfel, M.},
+year = {2017},
}

@article{Latorre,
-title={Image compression and entanglement},
-author={Latorre, Jose I},
-journal={arXiv preprint quant-ph/0510031},
-year={2005}
+title={Image compression and entanglement},
+author={Latorre, J. I.},
+url={https://arxiv.org/abs/quant-ph/0510031},
+journal={arXiv preprint quant-ph/0510031},
+year={2005}
}

@article{NMR,
-title = {Exact NMR simulation of protein-size spin systems using tensor train formalism},
-author = {Savostyanov, D. V. and Dolgov, S. V. and Werner, J. M. and Kuprov, Ilya},
-journal = {Phys. Rev. B},
-volume = {90},
-issue = {8},
-pages = {085139},
-numpages = {8},
-year = {2014},
-month = {Aug},
-publisher = {American Physical Society},
-doi = {10.1103/PhysRevB.90.085139},
-url = {https://link.aps.org/doi/10.1103/PhysRevB.90.085139}
+title = {Exact NMR simulation of protein-size spin systems using tensor train formalism},
+author = {Savostyanov, D. V. and Dolgov, S. V. and Werner, J. M. and Kuprov, I.},
+journal = {Phys. Rev. B},
+volume = {90},
+issue = {8},
+pages = {085139},
+numpages = {8},
+year = {2014},
+month = {Aug},
+publisher = {American Physical Society},
+doi = {10.1103/PhysRevB.90.085139},
+url = {https://link.aps.org/doi/10.1103/PhysRevB.90.085139}
}

@article {QuantumChemistry,
-author = {Szalay, Szilárd and Pfeffer, Max and Murg, Valentin and Barcza, Gergely and Verstraete, Frank and Schneider, Reinhold and Legeza, Örs},
-title = {Tensor product methods and entanglement optimization for ab initio quantum chemistry},
-journal = {International Journal of Quantum Chemistry},
-volume = {115},
-number = {19},
-issn = {1097-461X},
-url = {http://dx.doi.org/10.1002/qua.24898},
-doi = {10.1002/qua.24898},
-pages = {1342--1391},
-keywords = {tensor networks, DMRG, entanglement, tensor product approximation, quantum infromation},
-year = {2015},
+author = {Szalay, S. and Pfeffer, M. and Murg, V. and Barcza, G. and Verstraete, F. and Schneider, R. and Legeza, Ö.},
+title = {Tensor product methods and entanglement optimization for ab initio quantum chemistry},
+journal = {International Journal of Quantum Chemistry},
+volume = {115},
+number = {19},
+issn = {1097-461X},
+url = {http://dx.doi.org/10.1002/qua.24898},
+doi = {10.1002/qua.24898},
+pages = {1342--1391},
+keywords = {tensor networks, DMRG, entanglement, tensor product approximation, quantum information},
+year = {2015},
}

@article{Uncertainty,
-title={Enabling high-dimensional hierarchical uncertainty quantification by ANOVA and tensor-train decomposition},
-author={Zhang, Zheng and Yang, Xiu and Oseledets, Ivan V and Karniadakis, George E and Daniel, Luca},
-journal={IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems},
-volume={34},
-number={1},
-pages={63--76},
-year={2015},
-publisher={IEEE},
-doi={10.1109/TCAD.2014.2369505}
+title={Enabling high-dimensional hierarchical uncertainty quantification by ANOVA and tensor-train decomposition},
+author={Zhang, Z. and Yang, X. and Oseledets, I. and Karniadakis, G. E. and Daniel, L.},
+journal={IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems},
+volume={34},
+number={1},
+pages={63--76},
+year={2015},
+publisher={IEEE},
+doi={10.1109/TCAD.2014.2369505}
}

@incollection{NeuralNetworks,
-title = {Tensorizing Neural Networks},
-author = {Novikov, Alexander and Podoprikhin, Dmitrii and Osokin, Anton and Vetrov, Dmitry P},
-booktitle = {Advances in Neural Information Processing Systems 28},
-editor = {C. Cortes and N. D. Lawrence and D. D. Lee and M. Sugiyama and R. Garnett},
-pages = {442--450},
-year = {2015},
-publisher = {Curran Associates, Inc.},
-url = {http://papers.nips.cc/paper/5787-tensorizing-neural-networks.pdf}
+title = {Tensorizing Neural Networks},
+author = {Novikov, A. and Podoprikhin, D. and Osokin, A. and Vetrov, D. P.},
+booktitle = {Advances in Neural Information Processing Systems 28},
+editor = {C. Cortes and N. D. Lawrence and D. D. Lee and M. Sugiyama and R. Garnett},
+pages = {442--450},
+year = {2015},
+publisher = {Curran Associates, Inc.},
+url = {http://papers.nips.cc/paper/5787-tensorizing-neural-networks.pdf}
}

@incollection{Stoudenmire,
-title = {Supervised Learning with Tensor Networks},
-author = {Stoudenmire, Edwin and Schwab, David J.},
-booktitle = {Advances in Neural Information Processing Systems 29},
-pages = {4799},
-year = {2016},
-publisher = {Curran Associates, Inc.},
-url = {https://papers.nips.cc/paper/6211-supervised-learning-with-tensor-networks}
+title = {Supervised Learning with Tensor Networks},
+author = {Stoudenmire, E. and Schwab, D. J.},
+booktitle = {Advances in Neural Information Processing Systems 29},
+pages = {4799},
+year = {2016},
+publisher = {Curran Associates, Inc.},
+url = {https://papers.nips.cc/paper/6211-supervised-learning-with-tensor-networks}
}
