[NDTensors] [ITensors] Implement optional rank reduction for QR/RQ/QL/LQ decompositions #1099

Open
wants to merge 118 commits into base: main
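As context for the feature being merged: after a column-pivoted QR, trailing rows of R whose diagonal entries fall below a cutoff can be dropped together with the matching columns of Q, shrinking the rank of the factorization. A minimal stand-alone sketch of that idea at the plain-matrix level (illustrative only; `ColumnNorm` needs Julia ≥ 1.7, and this is not the PR's API):

```julia
using LinearAlgebra

# Rank-deficient 6x6 matrix (numerical rank 3).
A = randn(6, 3) * randn(3, 6)

# Column-pivoted QR: A[:, p] = Q * R with |diag(R)| non-increasing.
F = qr(A, ColumnNorm())
Q, R, p = Matrix(F.Q), F.R, F.p

# Keep only rows of R (and columns of Q) above a relative cutoff,
# analogous to trim_rows/rtol in the diff below.
rtol = 1e-12
keep = findlast(r -> abs(R[r, r]) > rtol * abs(R[1, 1]), 1:size(R, 1))
Q, R = Q[:, 1:keep], R[1:keep, :]

R = R[:, invperm(p)]             # undo the column permutation
@assert norm(Q * R - A) < 1e-8   # A recovered from a rank-3 factorization
```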
Commits (118)
f10e2d3
Refactor: Breakout index fixing in qr() for empty left/right indices
JanReimers Nov 13, 2022
cd88e80
qr decomp, switch to using combiners for gathering indices.
JanReimers Nov 14, 2022
9de2ff6
Handle empty index collections on Q or R.
JanReimers Nov 16, 2022
dfb66e0
Fix known block sparse failing use case
JanReimers Nov 16, 2022
9752128
Add attribution for Niklas
JanReimers Nov 16, 2022
253b5ec
Add qr test on autoMPO generated block sparse tensors
JanReimers Nov 16, 2022
c43814f
Move block sparse QR into NDTensors layer where it belongs.
JanReimers Nov 16, 2022
24c2ff7
Run the formatter
JanReimers Nov 16, 2022
78c556f
Fix qr overload disambiguation problem in julia 1.6
JanReimers Nov 17, 2022
401eef3
Add flux checks for block sparse qr tests
JanReimers Nov 17, 2022
c6486eb
Implement RQ decomposition.
JanReimers Nov 18, 2022
6e84eb6
Add rq decomp for block sparse matrices
JanReimers Nov 18, 2022
3424827
Add test for positive=true and rq decomp
JanReimers Nov 18, 2022
ecd8931
Implement QL/LQ decompositions
JanReimers Nov 18, 2022
b75f40d
Add tests for ComplexF64 and upper /lower checks for R/L
JanReimers Nov 19, 2022
93c391f
Fix flakey fail of NDTensors unit tests
JanReimers Nov 19, 2022
fd8546c
Endless struggle to get julia to see symbols I *already* put in expor…
JanReimers Nov 19, 2022
3b4962e
Removed unused version qr()
JanReimers Nov 19, 2022
3464d7f
Implement rank reducing QR/RQ/QL/LQ
JanReimers Nov 20, 2022
2a2391e
Pass through kwargs for LQ/QL functions
JanReimers Nov 20, 2022
b9dbda7
Run the formatter
JanReimers Nov 20, 2022
0f8c092
Try and fix lq symbol clash exposed in julia 1.8
JanReimers Nov 21, 2022
53fc8a0
Merge branch 'RQQLLQ' into QXRankReduction
JanReimers Nov 21, 2022
d0989cc
Merge from RQQLLQ branch
JanReimers Nov 21, 2022
4b6ed1a
Merge branch 'ITensor:main' into main
JanReimers Nov 26, 2022
d14fb6d
Merge branch 'main' into QXRankReduction
JanReimers Nov 27, 2022
990f8ca
Run formatter on Rank Reveal code
JanReimers Nov 29, 2022
67d3594
Augment unit tests to check all index directions for QR decomp
JanReimers Nov 29, 2022
ba0bfdd
Fix QN direction of qx link for rq decomp.
JanReimers Nov 30, 2022
79adbd0
Merge branch 'RQQLLQ' into QXRankReduction
JanReimers Nov 30, 2022
46b7571
Fix flux tests to reflect new QN dir fix for rq decomp
JanReimers Nov 30, 2022
ac1e53c
Merge branch 'ITensor:main' into main
JanReimers Nov 30, 2022
3b49821
Merge branch 'main' into QXRankReduction
JanReimers Nov 30, 2022
ce7512c
Fix flux tests to reflect new QN dir fix for rq decomp
JanReimers Dec 2, 2022
ad726dc
Remove direction tests
JanReimers Dec 2, 2022
35690ec
Merge branch 'main' into RQQLLQ
mtfishman Dec 7, 2022
82cef17
Merge branch 'ITensor:main' into main
JanReimers Dec 7, 2022
d05220d
Improvements based on Matt's code review
JanReimers Dec 7, 2022
06bf618
Merge branch 'main' into QXRankReduction
JanReimers Dec 7, 2022
48c5164
Merge branch 'RQQLLQ' into QXRankReduction
JanReimers Dec 8, 2022
ce72646
Merge branch 'ITensor:main' into main
JanReimers Jan 11, 2023
5d7efd3
Merge branch 'main' into QXRankReduction
JanReimers Jan 11, 2023
7059446
Use map for testing zero rows.
JanReimers Jan 14, 2023
405014e
Change from using epsrr to rr_cutoff for zero row threshold.
JanReimers Jan 14, 2023
cf7ef37
Merge remote-tracking branch 'origin/main' into BlocksparseQR
JanReimers Mar 2, 2023
2fe0fa3
Merge branch 'BlocksparseQR' into RQQLLQ
JanReimers Mar 2, 2023
e937732
Handle changes to similar() function interface.
JanReimers Mar 2, 2023
01d86f4
Add QR/RQ code to ensure all flux is moved onto R
JanReimers Mar 2, 2023
3ac4267
Implement all but one of Matt's code review recommendations
JanReimers Mar 2, 2023
0d0d120
Run the formatter
JanReimers Mar 2, 2023
2a93b6e
Remove NDTensors. qualifiers
JanReimers Mar 2, 2023
1e8830d
Put keyword arguments directly in signatures
JanReimers Mar 2, 2023
84a6f20
Fix flux requirements at the NDTensors level.
JanReimers Mar 3, 2023
cf722ef
Format
JanReimers Mar 4, 2023
7e39a55
Fix double swap of Q & L in the ql() function
JanReimers Mar 4, 2023
eb955be
Use generic function for most of the qr/ql operations.
JanReimers Mar 6, 2023
df6c07f
Implement core qx functions accepting both Linds and Rinds
JanReimers Mar 7, 2023
9223523
Format
JanReimers Mar 7, 2023
462ffbf
Strip out extra indices in Linds, not in A.
JanReimers Mar 7, 2023
9f80aa2
Move Heisenberg and Hubbard MPO QR tests over to legacy area.
JanReimers Mar 7, 2023
f931cbe
Format
JanReimers Mar 7, 2023
a31fb5c
Strip out extra indices in Linds, not in A.
JanReimers Mar 7, 2023
492443c
Move Heisenberg and Hubbard MPO QR tests over to legacy area.
JanReimers Mar 7, 2023
35e560f
Format
JanReimers Mar 7, 2023
9128de9
NDTensors unit tests: switch rq to ql decomp.
JanReimers Mar 8, 2023
82c18f1
Merge remote-tracking branch 'origin/RQQLLQ' into RQQLLQ
JanReimers Mar 8, 2023
9e86676
Merge branch 'RQQLLQ' into QXRankReduction
JanReimers Mar 8, 2023
cb1f0d1
Finish merge and get all decomp unit test working.
JanReimers Mar 8, 2023
32972cc
Users more likely to pass a String rather than a tagset
JanReimers Mar 8, 2023
aa29914
Format
JanReimers Mar 8, 2023
ca39f9c
Merge remote-tracking branch 'origin/main' into RQQLLQ
JanReimers Mar 8, 2023
d4709c4
Merge branch 'RQQLLQ' into QXRankReduction
JanReimers Mar 8, 2023
858e070
Fix bug in ql_positive routine
JanReimers Mar 8, 2023
885dfe9
Remove some debug code.
JanReimers Mar 9, 2023
5ee47bf
Fix: UndefVarError: tensor not defined
JanReimers Mar 9, 2023
420241d
Qualify tensor()
JanReimers Mar 9, 2023
8596e72
Merge branch 'RQQLLQ' into QXRankReduction
JanReimers Mar 9, 2023
5e72caa
Stop using Printf at the NDTensors level.
JanReimers Mar 10, 2023
2b10908
Enhance unit tests for qr/ql decomp
JanReimers Mar 11, 2023
3b5072d
Format
JanReimers Mar 12, 2023
59542df
Don't assume lapack qr/ql returns reals on the R/L diagonals
JanReimers Mar 12, 2023
042578b
Avoid type piracy in unit test code
JanReimers Mar 15, 2023
e6ff0e6
Remove unnecessary usage of where {ElT}
JanReimers Mar 15, 2023
af73844
Pass tags as a keyword argument. And format.
JanReimers Mar 15, 2023
e191c1d
Merge remote-tracking branch 'origin/main' into RQQLLQ
JanReimers Mar 15, 2023
6d3f6b3
Use new randomTensor(ElT,tuple) interface
JanReimers Mar 15, 2023
4a16f3d
Eliminate where{IndsT} for dense qr/ql
JanReimers Mar 15, 2023
11d43a4
Merge remote-tracking branch 'origin/main' into QXRankReduction
JanReimers Mar 15, 2023
49df0be
Merge remote-tracking branch 'origin/RQQLLQ' into QXRankReduction
JanReimers Mar 16, 2023
e53c85b
Handle zero pivots gracefully with the new sign(diag) code.
JanReimers Mar 16, 2023
04cccd7
Remove where {T} for low level ql routine.
JanReimers Mar 16, 2023
b22f8a3
Format
JanReimers Mar 16, 2023
43d914f
Merge branch 'RQQLLQ' into QXRankReduction
JanReimers Mar 16, 2023
3a8159c
Merge remote-tracking branch 'origin/main' into QXRankReduction
JanReimers Mar 17, 2023
0492071
Clean up and comment code.
JanReimers Mar 20, 2023
62ed9cc
Format and add rr-verbose flag for optional rank reduction output.
JanReimers Mar 20, 2023
0ec04b7
Remove the old test/decomp.jl file
JanReimers Mar 21, 2023
1961520
Set Random seed to keep tests deterministic.
JanReimers Mar 21, 2023
491b830
Can't use Random on the NDTensors CI machine
JanReimers Mar 21, 2023
9aff488
Merge branch 'main' into QXRankReduction
mtfishman Apr 3, 2023
f8976f9
Merge remote-tracking branch 'origin/main' into QXRankReduction
JanReimers Apr 13, 2023
fdce8fe
Fix names cutoff and verbose
JanReimers Apr 13, 2023
543d355
Unify positive gauge fix for qr/ql
JanReimers Apr 13, 2023
4d22400
Implement column pivoting with row removal
JanReimers Apr 13, 2023
1540aac
Format
JanReimers Apr 13, 2023
0cdc115
Fix some unit test fails.
JanReimers Apr 13, 2023
9da7b54
For column Pivot QR, return column permutation arrays.
JanReimers Apr 20, 2023
a954c54
Merge remote-tracking branch 'origin/main' into QXRankReduction
JanReimers Apr 20, 2023
be0074d
Support atol/rtol interface for column pivot qr
JanReimers Apr 24, 2023
a86f4cf
Support the new NoPivot, ColumnNorm types
JanReimers Apr 24, 2023
cde6dad
Support Matt's proposed interface
JanReimers Apr 25, 2023
c790493
Support ql decomp using the MatrixFactorizations package
JanReimers Apr 25, 2023
9fa3dd3
Try to avoid warnings in julia 1.8.5
JanReimers Apr 26, 2023
afe67fd
Format
JanReimers Apr 26, 2023
99abd4d
Try and avoid breaking ITensorGPU
JanReimers Apr 26, 2023
6091157
Try and avoid breaking ITensorGPU
JanReimers Apr 26, 2023
e33dd47
Merge remote-tracking branch 'origin/main' into QXRankReduction
JanReimers May 22, 2023
6b747dd
Switch from returning permutations to returning Rp.
JanReimers May 24, 2023
2 changes: 2 additions & 0 deletions NDTensors/Project.toml
@@ -12,6 +12,7 @@ Folds = "41a02a25-b8f0-4f67-bc48-60067656b558"
Functors = "d9f16b24-f501-4c13-a1f2-28368ffc5196"
HDF5 = "f67ccb44-e63f-5c2f-98bd-6dc0ccc4ba2f"
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
MatrixFactorizations = "a3b82374-2e81-5b9e-98ce-41277c0e4c87"
Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
Requires = "ae029012-a4dd-5104-9daa-d747884805df"
SimpleTraits = "699a6c99-e7fa-54fc-8d76-47d257e15c1d"
@@ -29,6 +30,7 @@ FLoops = "0.2.1"
Folds = "0.2.8"
Functors = "0.2, 0.3, 0.4"
HDF5 = "0.14, 0.15, 0.16"
MatrixFactorizations = "0.9.6"
Requires = "1.1"
SimpleTraits = "0.9.4"
SplitApplyCombine = "1.2.2"
1 change: 1 addition & 0 deletions NDTensors/src/NDTensors.jl
@@ -8,6 +8,7 @@ using FLoops
using Folds
using Random
using LinearAlgebra
using MatrixFactorizations
using StaticArrays
using Functors
using HDF5
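The new MatrixFactorizations dependency supplies the `ql` and `rq` factorizations imported in `imports.jl` below. A quick sanity sketch of the assumed API (the field names `.Q`, `.L`, `.R` follow that package's conventions, not this diff):

```julia
using LinearAlgebra, MatrixFactorizations

A = randn(4, 4)
F = ql(A)                                    # A = Q * L, L lower triangular
@assert norm(Matrix(F.Q) * F.L - A) < 1e-10
G = rq(A)                                    # A = R * Q, R upper triangular
@assert norm(G.R * Matrix(G.Q) - A) < 1e-10
```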
24 changes: 17 additions & 7 deletions NDTensors/src/blocksparse/linearalgebra.jl
@@ -304,28 +304,34 @@ qr(T::BlockSparseTensor{<:Any,2}; kwargs...) = qx(qr, T; kwargs...)
# This code thanks to Niklas Tausendpfund
# https://github.com/ntausend/variance_iTensor/blob/main/Hubig_variance_test.ipynb
#
function qx(qx::Function, T::BlockSparseTensor{<:Any,2}; kwargs...)
function qx(
qx::Function, T::BlockSparseTensor{<:Any,2}; block_rtol=-1.0, return_Rp=false, kwargs...
)
ElT = eltype(T)
# getting total number of blocks
nnzblocksT = nnzblocks(T)
nzblocksT = nzblocks(T)

Qs = Vector{DenseTensor{ElT,2}}(undef, nnzblocksT)
Xs = Vector{DenseTensor{ElT,2}}(undef, nnzblocksT)
if return_Rp
Xps = Vector{DenseTensor{ElT,2}}(undef, nnzblocksT)
end

for (jj, b) in enumerate(eachnzblock(T))
blockT = blockview(T, b)
QXb = qx(blockT; kwargs...) #call dense qr at src/linearalgebra.jl 387
QXb = qx(blockT; rtol=block_rtol, return_Rp, kwargs...) #call dense qr at src/linearalgebra.jl 387

if (isnothing(QXb))
return nothing
end

Q, X = QXb
Qs[jj] = Q
Xs[jj] = X
Qs[jj] = QXb[1]
Xs[jj] = QXb[2]
if return_Rp
Xps[jj] = QXb[3]
end
end

#
# Make the new index connecting Q and R
#
@@ -352,13 +358,17 @@ function qx(qx::Function, T::BlockSparseTensor{<:Any,2}; kwargs...)

Q = BlockSparseTensor(ElT, undef, nzblocksQ, indsQ)
X = BlockSparseTensor(ElT, undef, nzblocksX, indsX)
Xp = return_Rp ? BlockSparseTensor(ElT, undef, nzblocksX, indsX) : nothing

for n in 1:nnzblocksT
blockview(Q, nzblocksQ[n]) .= Qs[n]
blockview(X, nzblocksX[n]) .= Xs[n]
if return_Rp
blockview(Xp, nzblocksX[n]) .= Xps[n]
end
end

return Q, X
return Q, X, Xp
end

function exp(
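Why the per-block loop above is valid: the nonzero blocks of a flux-sparse tensor occupy disjoint row/column sectors, so factoring each block independently assembles into a factorization of the whole matrix. A toy block-diagonal illustration (standalone sketch, not the NDTensors types):

```julia
using LinearAlgebra

B1, B2 = randn(3, 2), randn(4, 3)
A = [B1 zeros(3, 3); zeros(4, 2) B2]   # block-"sparse" matrix

Q1, R1 = qr(B1)                        # factor each block independently,
Q2, R2 = qr(B2)                        # as the loop over eachnzblock(T) does

Q = [Matrix(Q1) zeros(3, 3); zeros(4, 2) Matrix(Q2)]
R = [R1 zeros(2, 3); zeros(3, 2) R2]
@assert norm(Q * R - A) < 1e-10        # assembled Q*R reproduces A
```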
4 changes: 4 additions & 0 deletions NDTensors/src/exports.jl
@@ -80,3 +80,7 @@ export

# linearalgebra.jl
qr

if VERSION >= v"1.7"
export RowNorm
end
2 changes: 2 additions & 0 deletions NDTensors/src/imports.jl
@@ -57,4 +57,6 @@ import Adapt: adapt_structure, adapt_storage

import LinearAlgebra: diag, exp, norm, qr

import MatrixFactorizations: ql, rq

import TupleTools: isperm
256 changes: 173 additions & 83 deletions NDTensors/src/linearalgebra/linearalgebra.jl
@@ -366,39 +366,180 @@ function LinearAlgebra.eigen(
V = complex(tensor(Dense(vec(VM)), Vinds))
return D, V, spec
end
#
# Trim out n rows of R based on norm(R_nn)<cutoff, where R_nn is bottom n rows of R.
# Also trim the corresponding columns of Q.
#
function trim_rows(
Q::AbstractMatrix, R::AbstractMatrix, atol::Float64, rtol::Float64; verbose=false
)
nr = size(R, 1)
@assert size(Q, 2) == nr #Sanity check.
#
# Find the largest n such that norm(R_nn)<=cutoff, where Rnn is the bottom right block with rows
# from n:nr. n=last_row_to_keep+1
#
last_row_to_keep = nr
do_atol, do_rtol = atol >= 0.0, rtol >= 0.0
# for r in nr:-1:1
# Rnn=norm(R[r:nr, :])
# R11=norm(R[1:r-1, :])
# if (do_atol && Rnn > atol) || (do_rtol && Rnn/R11 > rtol)
# last_row_to_keep = r
# break
# end
# end
#
# Could also do the same test but only looking at the diagonals
#
dR = diag(R)
for r in nr:-1:1
Rnn = norm(dR[r:nr])
R11 = norm(dR[1:(r - 1)])
if (do_atol && Rnn > atol) || (do_rtol && Rnn / R11 > rtol)
last_row_to_keep = r
break
end
end

num_zero_rows = nr - last_row_to_keep
if num_zero_rows == 0
verbose &&
println("Rank Reveal removing $num_zero_rows rows with atol=$atol, rtol=$rtol")
return Q, R
end
#
# Useful output for trouble shooting.
#
if verbose
println("Rank Reveal removing $num_zero_rows rows with atol=$atol, rtol=$rtol")
end

return Q[:, 1:last_row_to_keep], R[1:last_row_to_keep, :]
end

if VERSION >= v"1.7"
struct RowNorm end #for row pivoting lq
end

qr(T::DenseTensor{<:Any,2}; kwargs...) = qx(qr, T; kwargs...)
ql(T::DenseTensor{<:Any,2}; kwargs...) = qx(ql, T; kwargs...)

function qr(T::DenseTensor{<:Any,2}; positive=false, kwargs...)
qxf = positive ? qr_positive : qr
return qx(qxf, T; kwargs...)
# LinearAlgebra qr takes types like Val(true) or ColumnNorm for control of pivoting.
# We need some helper functions to deal with all these types changing between julia versions.
#
pivot_to_Bool(pivot::Bool)::Bool = pivot
pivot_to_Bool(::Val{false})::Bool = false
pivot_to_Bool(::Val{true})::Bool = true
if VERSION < v"1.7"
call_pivot(bpivot::Bool, ::Function) = Val(bpivot)
end
function ql(T::DenseTensor{<:Any,2}; positive=false, kwargs...)
qxf = positive ? ql_positive : ql
return qx(qxf, T; kwargs...)
if VERSION >= v"1.7"
pivot_to_Bool(pivot::NoPivot)::Bool = false
pivot_to_Bool(pivot::ColumnNorm)::Bool = true
pivot_to_Bool(pivot::RowNorm)::Bool = true
function call_pivot(bpivot::Bool, qx::Function)
if qx == qr
return bpivot ? ColumnNorm() : NoPivot() # LinearAlgebra
else
return Val(bpivot) # MatrixFactorizations
end
end
end

matrix(Q::LinearAlgebra.QRCompactWYQ) = Matrix(Q)
function matrix(Q::MatrixFactorizations.QLPackedQ)
n, m = size(Q.factors)
if n <= m
return Matrix(Q)
else
return Q * Matrix(LinearAlgebra.I, m, m)
end
end
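A note on these helpers: qr/ql return Q in a packed Householder form (`QRCompactWYQ`, `QLPackedQ`) rather than a plain array; `Matrix(Q)` materializes it, and the `Q * I` branch extracts the thin factor in the case where `Matrix(Q)` would have the wrong shape. A small sketch of the LinearAlgebra side:

```julia
using LinearAlgebra

F = qr(randn(5, 3))
# F.Q is a packed QRCompactWYQ, not a Matrix; materialize the thin factor:
Qthin = Matrix(F.Q)
size(Qthin) == (5, 3) || error("unexpected shape")
```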
#
# Generic function for qr and ql decomposition of dense matrix.
# The X tensor = R or L.
#
function qx(qx::Function, T::DenseTensor{<:Any,2}; kwargs...)
QM, XM = qx(matrix(T))
# Be aware that if positive==false, then typeof(QM)=LinearAlgebra.QRCompactWYQ, not Matrix
# It gets converted to matrix below.
# Make the new indices to go onto Q and R
q, r = inds(T)
q = dim(q) < dim(r) ? sim(q) : sim(r)
IndsT = indstype(T) #get the index type
function qx(
qx::Function,
T::DenseTensor{<:Any,2};
positive=false,
pivot=call_pivot(false, qx),
atol=-1.0, #absolute tolerance for rank reduction
rtol=-1.0, #relative tolerance for rank reduction
block_rtol=-1.0, #This is supposed to be for block sparse, but we reluctantly accept it here.
return_Rp=false,
verbose=false,
kwargs...,
)
bpivot = pivot_to_Bool(pivot) #We need a simple bool for making decisions below.

if rtol < 0.0 && block_rtol >= 0.0
rtol = block_rtol
end
do_rank_reduction = (atol >= 0.0) || (rtol >= 0.0)
if do_rank_reduction && qx == ql
@warn "User requested rq/ql decomposition with atol=$atol, rtol=$rtol." *
" Rank reduction requires column/row pivoting which is not supported for rq/ql decomposition in lapack/ITensors"
do_rank_reduction = false
end
if bpivot && qx == ql
@warn "User requested rq/ql decomposition with row/column pivoting." *
" Pivoting is not supported for rq/ql decomposition in lapack/ITensors"
bpivot = false
end
if do_rank_reduction #if do_rank_reduction==false then don't change bpivot.
bpivot = true
end
if !bpivot && return_Rp
@warn "User requested return of Rp matrix with no pivoting." *
" Please enable QR/LQ with pivoting to return the Rp matrix."
return_Rp = false
end

pivot = call_pivot(bpivot, qx) #Convert the bool to whatever type the qx function expects.
if bpivot
QM, XM, p = qx(matrix(T), pivot) #with column pivoting
QM, XM = trim_rows(Matrix(QM), XM, atol, rtol; verbose=verbose)
else
QM, XM = qx(matrix(T), pivot) #no column pivoting
QM = matrix(QM)
p = nothing
end
#
# Gauge fix diagonal of X into positive definite form.
#
positive && qx_positive!(qx, QM, XM)
#
# undo the permutation on R, so the T=Q*R again.
#
if bpivot
if return_Rp
XMp = XM # If requested save the permuted columns version of X
end
XM = XM[:, invperm(p)] # un-permute the columns of X
end
#
# Make the new indices to go onto Q and X
#
IndsT = indstype(T) #get the indices type
@assert IndsT.parameters[1] == IndsT.parameters[2] #they better be the same!
IndexT = IndsT.parameters[1] #establish the single index type.
q = IndexT(size(XM)[1]) #create the Q--X link index.
Qinds = IndsT((ind(T, 1), q))
Xinds = IndsT((q, ind(T, 2)))
Q = tensor(Dense(vec(Matrix(QM))), Qinds) #Q was strided
Q = tensor(Dense(vec(QM)), Qinds)
X = tensor(Dense(vec(XM)), Xinds)
return Q, X
if return_Rp
Xp = tensor(Dense(vec(XMp)), Xinds)
else
Xp = nothing
end

return Q, X, Xp
end
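Putting the keywords of the dense `qx` together, a hypothetical call sequence (`randomTensor(ElT, tuple)` is referenced in the commit log above; the exact usage here is an assumption, not taken from the PR's tests):

```julia
using NDTensors

T = NDTensors.randomTensor(Float64, (4, 6))

# Plain QR, no pivoting: the third return value Rp is `nothing`.
Q, R, Rp = NDTensors.qr(T)

# Rank-reducing QR: atol/rtol switch on column pivoting plus row trimming;
# return_Rp=true additionally returns the column-permuted R.
Q2, R2, Rp2 = NDTensors.qr(T; atol=1e-12, return_Rp=true)
```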

#
# Just flip signs between Q and R to get all the diagonals of R >=0.
# For rectangular M the indexing for "diagonal" is non-trivial.
#
# Required by svd_recursive
"""
qr_positive(M::AbstractMatrix)

@@ -423,74 +564,23 @@ function qr_positive(M::AbstractMatrix)
return (Q, R)
end

"""
ql_positive(M::AbstractMatrix)

Compute the QL decomposition of a matrix M
such that the diagonal elements of L are
non-negative. Such a QL decomposition of a
matrix is unique. Returns a tuple (Q,L).
"""
function ql_positive(M::AbstractMatrix)
sparseQ, L = ql(M)
Q = convert(Matrix, sparseQ)
nr, nc = size(L)
dc = nc > nr ? nc - nr : 0 #diag is shifted over by dc if nc>nr
for c in 1:(nc - dc)
if L[c, c + dc] != 0.0 #sign(0.0)==0.0 so we don't want to zero out a column of Q.
sign_Lc = sign(L[c, c + dc])
if c <= nr && !isone(sign_Lc)
L[c, 1:(c + dc)] *= sign_Lc #only flip non-zero portion of the column.
Q[:, c] *= conj(sign_Lc)
end
end
end
return (Q, L)
end

#
# Lapack replaces A with Q & R carefully packed together. So here we just copy A
# before letting lapack overwrite it.
#
function ql(A::AbstractMatrix; kwargs...)
Base.require_one_based_indexing(A)
T = eltype(A)
AA = similar(A, LinearAlgebra._qreltype(T), size(A))
copyto!(AA, A)
return ql!(AA; kwargs...)
end
#
# This is where the low level call to lapack actually occurs. Most of the work is
# about unpacking Q and L from the A matrix.
# Semi generic function for gauge fixing the diagonal of X into positive definite form.
# because the diagonal is difficult to locate for rectangular X (it moves between R and L)
# we use qx==ql to know if X is lower or upper.
#
function ql!(A::StridedMatrix{<:LAPACK.BlasFloat})
tau = Base.similar(A, min(size(A)...))
x = LAPACK.geqlf!(A, tau)
#save L from the lower portion of A, before orgql! mangles it!
nr, nc = size(A)
mn = min(nr, nc)
L = similar(A, (mn, nc))
for r in 1:mn
for c in 1:(r + nc - mn)
L[r, c] = A[r + nr - mn, c]
end
for c in (r + 1 + nc - mn):nc
L[r, c] = 0.0
end
end
# Now we need to shift the orth vectors from the right side of Q over to the left side, before calling orgql!
if (mn < nc)
for r in 1:nr
for c in 1:mn
A[r, c] = A[r, c + nc - mn]
function qx_positive!(qx::Function, Q::AbstractMatrix, X::AbstractMatrix)
nr, nc = size(X)
dc = (nc > nr && qx == ql) ? nc - nr : 0 #diag is shifted over by dc if nc>nr
for c in 1:Base.min(nr, nc)
if X[c, c + dc] != 0.0 #sign(0.0)==0.0 so we don't want to zero out a column of Q.
sign_Xc = sign(X[c, c + dc])
if !isone(sign_Xc)
X[c, :] *= sign_Xc
Q[:, c] *= conj(sign_Xc)
end
end
for r in 1:nr
A = A[:, 1:mn] #whack the extra columns in A.
end
end
LAPACK.orgql!(A, tau)
return A, L
end
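The gauge fix in `qx_positive!` rests on the identity Q*X = (Q*D')*(D*X) for any diagonal D of unit-modulus phases; choosing D = sign.(diag(X)) makes the diagonal of X non-negative without changing the product. A real-valued matrix-level sketch:

```julia
using LinearAlgebra

A = randn(4, 4)
Q, R = qr(A)
Q = Matrix(Q)

# D has entries ±1 here, so D * D == I and Q * R == (Q * D) * (D * R).
D = Diagonal([r == 0 ? one(r) : sign(r) for r in diag(R)])
Qp, Rp = Q * D, D * R

@assert all(diag(Rp) .>= 0)        # diagonal gauge-fixed to be non-negative
@assert norm(Qp * Rp - A) < 1e-10  # the factorization itself is unchanged
```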

# TODO: support alg keyword argument to choose the svd algorithm