Linear algebra functions

Linear algebra module

Fastor has a built-in linear algebra module for vectors (1D tensors), matrices (2D tensors) and multi-dimensional tensors. You do not need to link to an additional library like BLAS or LAPACK to access these features. For most linear algebra functions two variants are provided: for instance, matmul and the operator % both perform matrix multiplication, and determinant and det both compute the determinant of a matrix. For the most part there is no difference between them: the more verbose variants evaluate the expression immediately, while the less verbose ones delay the evaluation. Both variants are provided for convenience.
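
For instance, with the matrix multiplication variants (a minimal sketch; the 3x3 dimensions are chosen purely for illustration):

Tensor<double,3,3> A, B;
Tensor<double,3,3> C = matmul(A, B);  // verbose variant: evaluates immediately
auto expr = A % B;                    // terse variant: returns a lazy expression
Tensor<double,3,3> D = expr;          // evaluation happens on assignment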

Matrix-matrix multiplication [matmul, %]

Tensor<double,M,K> A;
Tensor<double,K,N> B;
Tensor<double,M,N> C = matmul(A,B);
Tensor<double,M,N> D = A % B;

Matrix-vector multiplication [matmul, %]

Tensor<double,M,K> A;
Tensor<double,K> b;
Tensor<double,M> c = matmul(A,b);
Tensor<double,M> d = A % b;

Vector-matrix multiplication [matmul, %]

Tensor<double,K> a;
Tensor<double,K,N> B;
Tensor<double,N> c = matmul(a,B);
Tensor<double,N> d = a % B;

Triangular matrix-matrix multiplication [tmatmul]

Multiply two matrices when either one or both of them are triangular. They can be lower or upper triangular matrices. The matrices need not be square; trapezoidal cases are covered as well.

Tensor<double,M,K> A;
Tensor<double,K,N> B;
Tensor<double,M,N> C1 = tmatmul<UpLoType::Lower,UpLoType::General>(A,B);
Tensor<double,M,N> C2 = tmatmul<UpLoType::General,UpLoType::Upper>(A,B);
Tensor<double,M,N> C3 = tmatmul<UpLoType::Lower,UpLoType::Upper>(A,B);
.
.
.

For moderate-sized matrices you should expect nearly 2X performance when only one operand is triangular and 4X performance when both operands are triangular. If both matrices are of General type this falls back to matmul.
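
As a sketch of that fallback (reusing A and B from the block above), specifying General for both operands gives exactly what matmul would give:

Tensor<double,M,N> C4 = tmatmul<UpLoType::General,UpLoType::General>(A,B);
// identical to matmul(A,B), with no triangular speed-up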

Norm [norm]

Compute the L2 norm of a tensor. For higher-order tensors it returns the Frobenius norm.

Tensor<double,M,N> A;
double value = norm(A);

Inner Product [inner]

Compute the inner product of two tensors of the same size. Works for tensors of any order.

Tensor<double,K,M,N> A,B;
double value = inner(A,B);
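
As a quick sanity check (a minimal sketch with arbitrarily chosen values), the norm from the previous section is the square root of the inner product of a tensor with itself:

Tensor<double,2,2> A = {{1.,2.},{3.,4.}};
double ip = inner(A, A);   // 1 + 4 + 9 + 16 = 30
double n  = norm(A);       // sqrt(30)
// n*n and ip agree up to floating point round-off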

Outer Product [outer]

Compute the outer product of two tensors; the order of the result is the sum of the orders of the two operands. Works for tensors of any order.

Tensor<double,K,M> A;
Tensor<double,N,P> B;
Tensor<double,K,M,N,P> C = outer(A,B);
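
For concreteness, a minimal sketch with two small vectors (values chosen arbitrarily); every entry of the result is the product of one entry from each operand:

Tensor<double,2> a = {1., 2.};
Tensor<double,3> b = {10., 20., 30.};
Tensor<double,2,3> C = outer(a, b);   // C(i,j) == a(i) * b(j), e.g. C(1,2) == 60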

Cross Product [cross]

The cross product of two tensors. For vectors (1D tensors) it returns the classic cross product of the two vectors.

Tensor<double,3> a, b;
Tensor<double,3> c = cross(a,b);

For higher-order tensors it returns the tensor cross product of the two tensors, which generalises the vector cross product to higher-order tensors; see Bonet et al.

Tensor<double,3,3> A, B;
Tensor<double,3,3> C = cross(A,B);
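
In index notation this tensor cross product of two second-order tensors is commonly written as (a sketch of the definition used in that line of work, with ε the Levi-Civita permutation symbol):

(A × B)_{iI} = ε_{ijk} ε_{IJK} A_{jJ} B_{kK}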

Determinant [determinant, det]

Compute the determinant of a tensor. Works for tensors of any order.

Tensor<double,M,M> A;
double value = determinant(A);
double value = det(A);

Options can be passed to request the type of determinant computation:

Tensor<double,M,M> A;
double det1 = det<DetCompType::Simple>(A); // [Default] using hand-optimised calculation for small tensors
double det2 = det<DetCompType::QR>(A);     // Using QR factorisation

For higher-order tensors it returns the determinants of the matrices formed by the last two dimensions, for instance:

Tensor<double,K,M,M> A;
Tensor<double,K> values = determinant(A);

Absolute Determinant [absdet]

Compute the absolute value of the determinant of a tensor.

Tensor<double,M,M> A;
double value = absdet(A);

Options can be passed to request the type of determinant computation:

Tensor<double,M,M> A;
double det1 = absdet<DetCompType::Simple>(A); // [Default] using hand-optimised calculation for small tensors
double det2 = absdet<DetCompType::QR>(A);     // Using QR factorisation

Logarithmic Determinant [logdet]

Compute the natural logarithm of the determinant of a tensor.

Tensor<double,M,M> A;
double value = logdet(A);

Options can be passed to request the type of determinant computation:

Tensor<double,M,M> A;
double det1 = logdet<DetCompType::Simple>(A); // [Default] using hand-optimised calculation for small tensors
double det2 = logdet<DetCompType::QR>(A);     // Using QR factorisation

Transpose [transpose, trans]

Compute the transpose of a tensor. Works for tensors of any order.

Tensor<double,M,N> A;
Tensor<double,N,M> B = transpose(A);
Tensor<double,N,M> C = trans(A);

For higher-order tensors it transposes the matrices formed by the last two dimensions, for instance:

Tensor<double,K,M,N> A;
Tensor<double,K,N,M> B = transpose(A);

Conjugate Transpose [ctranspose, ctrans]

Compute the conjugate transpose of a complex-valued tensor. Works for tensors of any order.

Tensor<std::complex<double>,M,N> A;
Tensor<std::complex<double>,N,M> B = ctranspose(A);
Tensor<std::complex<double>,N,M> C = ctrans(A);

For higher-order tensors it conjugate-transposes the matrices formed by the last two dimensions, for instance:

Tensor<std::complex<double>,K,M,N> A;
Tensor<std::complex<double>,K,N,M> B = ctranspose(A);

Adjoint/Adjugate [adjoint, adj]

Compute the adjoint, or more precisely the adjugate, of a 2D tensor. Note that this does not compute the conjugate transpose of a complex-valued 2D tensor [use ctranspose or ctrans for such cases]. Works for tensors of any order.

Tensor<double,M,M> A;
Tensor<double,M,M> B = adjoint(A);
Tensor<double,M,M> C = adj(A);

For higher-order tensors it returns the adjugate of the matrices formed by the last two dimensions, for instance:

Tensor<double,K,M,M> A;
Tensor<double,K,M,M> B = adjoint(A);

Cofactor [cofactor, cof]

Compute the cofactor of a 2D tensor. The cofactor of a matrix is the transpose of its adjugate, i.e. cof(A) == trans(adj(A)). Works for tensors of any order.

Tensor<double,M,M> A;
Tensor<double,M,M> B = cofactor(A);
Tensor<double,M,M> C = cof(A);
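
A minimal sketch (with an arbitrary matrix) verifying the identity stated above:

Tensor<double,3,3> A = {{2.,1.,0.},{1.,3.,1.},{0.,1.,4.}};
Tensor<double,3,3> C1 = cofactor(A);
Tensor<double,3,3> C2 = transpose(adjoint(A));
assert(norm(C1 - C2) < 1e-12);   // cof(A) == trans(adj(A))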

For higher-order tensors it returns the cofactor of the matrices formed by the last two dimensions, for instance:

Tensor<double,K,M,M> A;
Tensor<double,K,M,M> B = cofactor(A);

Trace [trace]

Compute the trace of a 2D tensor (matrix).

Tensor<double,M,M> A;
double value = trace(A);

For higher-order tensors it returns the traces of the matrices formed by the last two dimensions, for instance:

Tensor<double,K,M,M> A;
Tensor<double,K> values = trace(A);

Tril [tril]

Get the lower triangular part of a matrix.

Tensor<double,M,M> A;
Tensor<double,M,M> B = tril(A);
Tensor<double,M,M> C = tril(A,1);

Triu [triu]

Get the upper triangular part of a matrix.

Tensor<double,M,M> A;
Tensor<double,M,M> B = triu(A);
Tensor<double,M,M> C = triu(A,-2);
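
For concreteness, a minimal sketch of the default behaviour (the optional integer argument in the snippets above presumably shifts which diagonal is kept, but its exact convention is not documented here):

Tensor<double,3,3> A = {{1.,2.,3.},{4.,5.,6.},{7.,8.,9.}};
Tensor<double,3,3> L = tril(A);   // {{1,0,0},{4,5,0},{7,8,9}}
Tensor<double,3,3> U = triu(A);   // {{1,2,3},{0,5,6},{0,0,9}}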

LU Decomposition [lu]

Compute the LU factorisation of a tensor.

Tensor<double,M,M> A;
Tensor<double,M,M> L, U;
lu(A, L, U);

Options can be passed to request the type of LU factorisation:

Tensor<double,M,M> A;
Tensor<double,M,M> L, U;
lu<LUCompType::BlockLU>(A, L, U);   // [Default] using block LU [fast]
lu<LUCompType::SimpleLU>(A, L, U);  // Using standard LU [slow]
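
A minimal worked sketch (with an arbitrary diagonally dominant matrix, in line with the note below) checking that the factors reproduce the original matrix:

Tensor<double,3,3> A = {{4.,1.,0.},{1.,5.,2.},{0.,2.,6.}};
Tensor<double,3,3> L, U;
lu(A, L, U);
assert(norm(L % U - A) < 1e-10);   // L times U recovers A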

For performance reasons, the LU module does not activate pivoting by default, hence it is your responsibility to ensure that your matrix is not only invertible but also diagonally dominant. If you want to be safe, you can activate partial pivoting by providing the computation option and an empty permutation vector:

Tensor<double,M,M> A;
Tensor<double,M,M> L, U;
Tensor<size_t, M> p;                      // Permutation vector
lu<LUCompType::BlockLUPiv>(A, L, U, p);   // Using block LU with pivoting [fast]
lu<LUCompType::SimpleLUPiv>(A, L, U, p);  // Using standard LU with pivoting [slow]

or equivalently providing a permutation matrix:

Tensor<double,M,M> A;
Tensor<double,M,M> L, U, P;
lu<LUCompType::BlockLUPiv>(A, L, U, P);   // Using block LU with pivoting [fast]
lu<LUCompType::SimpleLUPiv>(A, L, U, P);  // Using standard LU with pivoting [slow]

The original matrix can then be reconstructed using:

auto rA = reconstruct(A, L, U, p); 
assert(norm(rA - A) < 1e-8);

// or with permutation matrix
auto rA = reconstruct(A, L, U, P); 
assert(norm(rA - A) < 1e-8); 

QR Decomposition [qr]

Compute the QR factorisation of a tensor.

Tensor<double,M,M> A;
Tensor<double,M,M> Q, R;
qr(A, Q, R);

Options can be passed to request the type of QR factorisation:

Tensor<double,M,M> A;
Tensor<double,M,M> Q, R;
qr<QRCompType::MGSR>(A, Q, R);  // [Default] using Modified Gram-Schmidt Row-wise (MGSR)
qr<QRCompType::HHR>(A, Q, R);   // Using Householder reflections
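
A minimal sketch (with an arbitrary matrix) checking the factorisation:

Tensor<double,3,3> A = {{2.,1.,0.},{1.,3.,1.},{0.,1.,2.}};
Tensor<double,3,3> Q, R;
qr(A, Q, R);
assert(norm(Q % R - A) < 1e-10);   // Q times R reproduces A
// matmul(transpose(Q), Q) should additionally be close to the identity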

Inverse [inverse, inv]

Compute the inverse of a 2D tensor. Works for tensors of any order.

Tensor<double,M,M> A;
Tensor<double,M,M> B = inverse(A);
Tensor<double,M,M> C = inv(A);
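
A minimal sketch (with an arbitrary invertible matrix) verifying the result:

Tensor<double,3,3> A = {{4.,1.,0.},{1.,3.,1.},{0.,1.,2.}};
Tensor<double,3,3> iA = inverse(A);
Tensor<double,3,3> P = A % iA;             // should be (numerically) the 3x3 identity
assert(std::abs(trace(P) - 3.0) < 1e-10);  // trace of the 3x3 identity is 3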

For higher-order tensors it returns the inverse of the matrices formed by the last two dimensions, for instance:

Tensor<double,K,M,M> A;
Tensor<double,K,M,M> B = inverse(A);

Options can be passed to request the type of inverse computation:

Tensor<double,M,M> A;
auto iA0 = inverse<InvCompType::BlockLU>(A);      // [Default] using block LU
auto iA1 = inverse<InvCompType::SimpleLUPiv>(A);  // Using simple LU with pivoting
.
.
.

Triangular Inverse [tinverse]

Compute the inverse of a 2D triangular tensor. The tensor can be a lower or an upper triangular matrix. This option is currently only available in combination with InvCompType::SimpleInv.

Tensor<double,M,M> A;
Tensor<double,M,M> B1 = tinverse<InvCompType::SimpleInv, UpLoType::Upper>(A);
Tensor<double,M,M> B2 = tinverse<InvCompType::SimpleInv, UpLoType::UniLower>(A);

Solve [solve]

Solve a linear system of equations.

Tensor<double,M,M> A;
Tensor<double,M> b;
Tensor<double,M> x = solve(A,b);
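
A minimal sketch (with an arbitrary well-conditioned system) checking the solution:

Tensor<double,3,3> A = {{4.,1.,0.},{1.,3.,1.},{0.,1.,2.}};
Tensor<double,3> b = {1., 2., 3.};
Tensor<double,3> x = solve(A, b);
assert(norm(A % x - b) < 1e-10);   // the residual should be (numerically) zero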

Options can be passed to request the type of solution:

Tensor<double,M,M> A;
Tensor<double,M,N> B;                               // Multiple right hand sides
auto X0 = solve<SolveCompType::SimpleInv>(A,B);     // Using safe inversion [fast]
auto X1 = solve<SolveCompType::BlockLU>(A,B);       // Using block LU [fast]
auto X2 = solve<SolveCompType::SimpleLUPiv>(A,B);   // Using simple LU with pivoting [slow]
.
.
.