I recently adjusted the solving functionality over $\mathbb Q$ to use an FFLU decomposition in a solve context. I have now noticed that computing an FFLU decomposition is actually often slower than computing an RREF, at least for matrices with more rows than columns.
julia> A = matrix(QQ, 100, 80, [ rand(QQ, -10:10) for _ in 1:100*80 ]);
julia> @btime rref(A);
337.362 μs (15 allocations: 306.53 KiB)
julia> @btime my_fflu(A);
26.944 ms (652 allocations: 195.39 KiB)
The same is true for matrices with integer entries (so there is no clearing of denominators in the FFLU) and for square matrices:
julia> A = matrix(QQ, 100, 80, [ rand(ZZ, -10:10) for _ in 1:100*80 ]);
julia> @btime rref(A);
160.577 μs (15 allocations: 306.53 KiB)
julia> @btime my_fflu(A);
9.122 ms (10 allocations: 190.38 KiB)
julia> A = matrix(QQ, 100, 100, [ rand(QQ, -10:10) for _ in 1:100*100 ]);
julia> @btime rref(A);
434.196 μs (15 allocations: 397.94 KiB)
julia> @btime my_fflu(A);
45.725 ms (1895 allocations: 252.60 KiB)
Only if there are more columns than rows does FFLU seem to be faster, and even then not by much:
julia> A = matrix(QQ, 80, 100, [ rand(QQ, -10:10) for _ in 1:80*100 ]);
julia> @btime rref(A);
42.874 ms (942 allocations: 6.74 MiB)
julia> @btime my_fflu(A);
27.035 ms (10 allocations: 191.00 KiB)
my_fflu is this function:
function my_fflu(A::QQMatrix)
    # Write the transpose of A as an integer matrix Aint together with a
    # common denominator dA.
    Aint = zero_matrix(FlintZZ, ncols(A), nrows(A))
    dA = FlintZZ()
    ccall((:fmpq_mat_get_fmpz_mat_matwise, Nemo.libflint), Nothing,
          (Ref{ZZMatrix}, Ref{ZZRingElem}, Ref{QQMatrix}), Aint, dA, transpose(A))
    # flint expects 0-based entries in the permutation.
    p = Generic.Perm(ncols(A))
    dLU = ZZ()
    p.d .-= 1
    # In-place fraction-free LU decomposition; the return value is the rank
    # and dLU receives the denominator of the decomposition.
    r = ccall((:fmpz_mat_fflu, Nemo.libflint), Int,
              (Ref{ZZMatrix}, Ref{ZZRingElem}, Ptr{Int}, Ref{ZZMatrix}, Cint),
              Aint, dLU, p.d, Aint, Cint(false))
    # Back to 1-based entries and invert the permutation.
    p.d .+= 1
    inv!(p)
    return nothing # only used for timing
end
What do we do now? The easiest option would be to use RREF again in the solve context and leave the decision which algorithm to use to flint (there is a heuristic for this in https://github.com/flintlib/flint/blob/main/src/fmpz_mat/rref.c).
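For comparison, here is a minimal sketch of what going back to RREF in the solve context could look like; solve_via_rref is a hypothetical helper, not the actual solve code, and it only returns a particular solution for consistent systems:

using Nemo

function solve_via_rref(A::QQMatrix, b::QQMatrix)
    # Row-reduce the augmented matrix [A | b]; rref calls into flint, which
    # picks the algorithm via its own heuristic.
    n = ncols(A)
    r, R = rref(hcat(A, b))
    # Read off a particular solution from the pivot columns.
    x = zero_matrix(QQ, n, ncols(b))
    for i in 1:r
        j = findfirst(k -> !iszero(R[i, k]), 1:n)
        j === nothing && error("inconsistent system") # pivot lies in the b-part
        for k in 1:ncols(b)
            x[j, k] = R[i, n + k]
        end
    end
    return x
end

The point is that the only linear algebra call here is rref, so the decision which algorithm to use stays inside flint.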