Merge 1cdeb06 into 82d3c6f
Jutho committed Nov 19, 2019
2 parents 82d3c6f + 1cdeb06 commit e7b78ff
Showing 15 changed files with 712 additions and 32 deletions.
4 changes: 3 additions & 1 deletion Project.toml
@@ -6,14 +6,16 @@ version = "1.3.1"
[deps]
LRUCache = "8ac3fa9e-de4c-5943-b1dc-09c6b5f20637"
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
+Requires = "ae029012-a4dd-5104-9daa-d747884805df"
Strided = "5e0ebb24-38b0-5f93-81fe-25c709ecae67"
TupleTools = "9d95972d-f1c8-5527-a6e0-b4b365fa01f6"

[compat]
LRUCache = "1"
-julia = "1"
Strided = "0.3.3,1"
TupleTools = "1.1"
+Requires = "0.5"
+julia = "1"

[extras]
Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
11 changes: 7 additions & 4 deletions README.md
@@ -6,11 +6,14 @@ Fast tensor operations using a convenient Einstein index notation.
|:-------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------:|
| [![][docs-stable-img]][docs-stable-url] [![][docs-dev-img]][docs-dev-url] | [![][travis-img]][travis-url] [![][appveyor-img]][appveyor-url] [![][codecov-img]][codecov-url] [![][coveralls-img]][coveralls-url] | [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3245497.svg)](https://doi.org/10.5281/zenodo.3245497) |

-**TensorOperations v1.0.0 represents a significant rewrite from previous versions.**
+**TensorOperations v2.0.0 represents a significant update and rewrite from previous versions.**

While the exported API was left mostly unchanged, there are a few
breaking changes, especially in the function syntax.
+* TensorOperations.jl now exports an `ncon` function, familiar in the quantum tensor network community and mostly compatible with e.g. [arXiv:1402.0939](https://arxiv.org/abs/1402.0939). Unlike the `@tensor` macro, which has been at the heart of TensorOperations.jl, `ncon` analyzes the network at runtime; as a consequence, its output is not type-inferrable. On the other hand, this allows the use of dynamic index specifications which are not known at compile time. There is also an `@ncon` macro which uses the same format and likewise allows dynamic index specifications, but has the advantage that it adds a hook into the global LRU cache where temporary objects are stored and recycled. (A usage sketch follows this list.)

+* TensorOperations.jl now supports `CuArray` objects via NVIDIA's CUTENSOR library, which is wrapped in CuArrays.jl. This requires that the latter is also loaded, with `using CuArrays`. `CuArray` objects can be used directly in the existing calls and macro environments such as `@tensor` and `@ncon`. However, no operation should try to mix a normal `Array` and a `CuArray`. There is also a new `@cutensor` macro which will move all array objects to the GPU and perform the contractions and permutations there. Objects are moved to the GPU only when they are first needed, so that the transfer time for later objects can overlap with the computation time for operations on earlier objects. (See the GPU sketch below this list.)

+* TensorOperations.jl now has a `@notensor` macro to indicate that a block within an `@tensor` environment (or `@tensoropt` or `@cutensor`) should be left alone: it contains valid Julia code that should not be transformed.
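
Since these additions are the core of the 2.0 release, here is a minimal, hedged sketch of how `ncon`, `@ncon`, and `@notensor` fit together. The array sizes, index labels, and variable names below are arbitrary illustrative choices, not taken from the package documentation:

```julia
using TensorOperations

A = randn(4, 4, 4)
B = randn(4, 4)

# ncon: positive labels (1, 2) are contracted, negative labels (-1) stay open.
# Here C[a] = Σ_{b,c} A[a, b, c] * B[b, c], computed from a runtime specification.
C = ncon([A, B], [[-1, 1, 2], [1, 2]])

# @ncon accepts the same (possibly runtime-generated) specification, but also
# hooks into the global LRU cache for temporary objects:
C2 = @ncon([A, B], [[-1, 1, 2], [1, 2]])

# @notensor marks plain Julia code inside an @tensor block:
@tensor begin
    @notensor n = size(A, 1)            # ordinary Julia, left untransformed
    D[a, b] := A[a, c, d] * A[b, c, d]  # contraction over the labels c and d
end
size(D) == (n, n)  # true
```

For the GPU path, an analogous (untested) sketch would require a CUDA-capable device and CuArrays.jl; `@cutensor` then moves the host arrays over and contracts them via CUTENSOR:

```julia
using CuArrays, TensorOperations  # CuArrays must be loaded to enable CUTENSOR support

A = randn(Float32, 64, 64)
B = randn(Float32, 64, 64)
@cutensor C[i, j] := A[i, k] * B[k, j]  # contraction performed on the GPU
```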

[docs-dev-img]: https://img.shields.io/badge/docs-dev-blue.svg
[docs-dev-url]: https://jutho.github.io/TensorOperations.jl/latest

@@ -47,4 +50,4 @@ end
```
In the second to last line, the result of the operation will be stored in the preallocated array `D`, whereas the last line uses a different assignment operator `:=` in order to define and allocate a new array `E` of the correct size. The contents of `D` and `E` will be equal.
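
As a self-contained illustration of the two assignment forms (a minimal sketch with arbitrarily chosen sizes, independent of the elided example above):

```julia
using TensorOperations

A = randn(5, 5)
B = randn(5, 5)

D = zeros(5, 5)
@tensor D[i, j] = A[i, k] * B[k, j]    # `=` writes into the preallocated D
@tensor E[i, j] := A[i, k] * B[k, j]   # `:=` defines and allocates a new E

D ≈ E  # true: both hold the matrix product A * B
```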

For more information, please see the docs.
2 changes: 2 additions & 0 deletions docs/src/implementation.md
@@ -1,5 +1,7 @@
# Implementation

+*** Warning: this section still needs to be updated for version 2.0 ***

## Index notation and the `@tensor` macro

We start by describing the implementation of the `@tensor` and `@tensoropt` macros. The
11 changes: 8 additions & 3 deletions docs/src/index.md
@@ -18,14 +18,21 @@ Install with the package manager, `pkg> add TensorOperations`.
via Einstein's index notation convention. The index notation is analyzed at compile time.
* Ability to
[optimize pairwise contraction order](https://doi.org/10.1103/PhysRevE.90.033315)
-using the `@tensoropt` macro.
+using the `@tensoropt` macro. This optimization is performed at compile time, and the chosen contraction order is hard-coded into the generated expression. The similar macro `@tensoropt_verbose` provides more information on the optimization process. (See the sketch at the end of this file's changes.)
+* ***New***: a function `ncon` (for network contractor) for contracting a group of
+  tensors (a.k.a. a tensor network), as well as a corresponding `@ncon` macro that
+  simplifies and optimizes this slightly. Unlike the previous macros, `ncon` and `@ncon`
+  do not analyze the contractions at compile time, thus allowing them to deal with
+  dynamic networks or index specifications.
* Support for any Julia Base array which qualifies as strided, i.e. such that its entries
are laid out according to a regular pattern in memory. The only exceptions are
`ReinterpretedArray` objects (implementation provided by Strided.jl, see below).
Additionally, `Diagonal` objects whose underlying diagonal data is stored as a strided
vector are supported. This facilitates tensor contractions where one of the operands is
e.g. a diagonal matrix of singular values or eigenvalues, which are returned as a
`Vector` by Julia's `eigen` or `svd` methods. (See the sketch at the end of this file's changes.)
+* ***New***: Support for `CuArray` objects if used together with CuArrays.jl, by relying
+  on (and thus providing a high-level interface into) NVIDIA's CUTENSOR library.
* Implementation can easily be extended to other types, by overloading a small set of
methods.
* Efficient implementation of a number of basic tensor operations (see below), by relying
@@ -68,7 +75,5 @@ every more complicated tensor expression is deconstructed.

## To do list

-* Make cache threadsafe.

* Make it easier to check contraction order and to splice in runtime information, or
optimize based on memory footprint or other custom cost functions.
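
To make the `@tensoropt` and `Diagonal` features from the feature list above concrete, here is a minimal sketch; the dimensions and index labels are illustrative assumptions, not taken from the documentation:

```julia
using LinearAlgebra, TensorOperations

# Diagonal support: svd returns the singular values as a plain Vector,
# and wrapping them in Diagonal lets them enter a contraction directly.
A = randn(10, 10)
U, S, V = svd(A)
Sd = Diagonal(S)
@tensor B[i, j] := U[i, k] * Sd[k, l] * conj(V[j, l])
B ≈ A  # true: reconstructs A = U * Diagonal(S) * V'

# @tensoropt chooses the pairwise contraction order at compile time;
# with no annotations, all index dimensions are treated as comparable.
P = randn(8, 8, 8); Q = randn(8, 8, 8); R = randn(8, 8, 8)
@tensoropt D[a, b, c] := P[a, e, f] * Q[f, g, b] * R[g, e, c]
```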
