
FFLU vs RREF for QQMatrix #1710

Closed
joschmitt opened this issue Apr 10, 2024 · 1 comment · Fixed by #1713

@joschmitt (Collaborator)

I recently adjusted the solving functionality over $\mathbb Q$ to use an FFLU decomposition in a solve context. I have now noticed that computing an FFLU decomposition is often slower than computing an RREF, at least for matrices with more rows than columns.
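
(For context: a solve context caches a decomposition of the matrix so that several right-hand sides can be solved without refactorizing. A minimal usage sketch, assuming the solve_init/solve interface from AbstractAlgebra's Solve module and an explicitly consistent right-hand side:

using Nemo

A = matrix(QQ, 100, 80, [ rand(QQ, -10:10) for _ in 1:100*80 ])
C = solve_init(A)                                  # build the solve context once
x = matrix(QQ, 80, 1, [ rand(QQ, -10:10) for _ in 1:80 ])
b = A * x                                          # consistent right-hand side
y = solve(C, b; side = :right)                     # solve A*y = b, reusing the cached decomposition

The benchmarks below compare only the underlying decompositions.)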

julia> A = matrix(QQ, 100, 80, [ rand(QQ, -10:10) for _ in 1:100*80 ]);

julia> @btime rref(A);
  337.362 μs (15 allocations: 306.53 KiB)

julia> @btime my_fflu(A);
  26.944 ms (652 allocations: 195.39 KiB)

The same is true for matrices with integer coefficients (so there is no clearing of denominators in the FFLU) and for square matrices:

julia> A = matrix(QQ, 100, 80, [ rand(ZZ, -10:10) for _ in 1:100*80 ]);

julia> @btime rref(A);
  160.577 μs (15 allocations: 306.53 KiB)

julia> @btime my_fflu(A);
  9.122 ms (10 allocations: 190.38 KiB)

julia> A = matrix(QQ, 100, 100, [ rand(QQ, -10:10) for _ in 1:100*100 ]);

julia> @btime rref(A);
  434.196 μs (15 allocations: 397.94 KiB)

julia> @btime my_fflu(A);
  45.725 ms (1895 allocations: 252.60 KiB)

Only if there are more columns than rows does FFLU seem to do better (though not by much):

julia> A = matrix(QQ, 80, 100, [ rand(QQ, -10:10) for _ in 1:80*100 ]);

julia> @btime rref(A);
  42.874 ms (942 allocations: 6.74 MiB)

julia> @btime my_fflu(A);
  27.035 ms (10 allocations: 191.00 KiB)

my_fflu is this function:

function my_fflu(A::QQMatrix)
    # Write transpose(A) as an integer matrix Aint together with a common
    # denominator dA (so Aint has ncols(A) rows and nrows(A) columns).
    Aint = zero_matrix(FlintZZ, ncols(A), nrows(A))
    dA = FlintZZ()
    ccall((:fmpq_mat_get_fmpz_mat_matwise, Nemo.libflint), Nothing,
          (Ref{ZZMatrix}, Ref{ZZRingElem}, Ref{QQMatrix}), Aint, dA, transpose(A))
    # Fraction-free LU decomposition of Aint in place; flint expects a
    # 0-based permutation, hence the shifts of p.d before and after the call.
    p = Generic.Perm(ncols(A))
    dLU = ZZ()
    p.d .-= 1
    r = ccall((:fmpz_mat_fflu, Nemo.libflint), Int,
              (Ref{ZZMatrix}, Ref{ZZRingElem}, Ptr{Int}, Ref{ZZMatrix}, Cint),
              Aint, dLU, p.d, Aint, Cint(false))
    p.d .+= 1
    inv!(p)
    return nothing
end
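
For completeness, the timings above can be reproduced with a script along the following lines (a sketch; it assumes BenchmarkTools is installed for @btime and interpolates the matrix to avoid global-variable overhead):

using Nemo, BenchmarkTools

# Same shape as the first example above: more rows than columns.
A = matrix(QQ, 100, 80, [ rand(QQ, -10:10) for _ in 1:100*80 ])

@btime rref($A);      # RREF via Nemo's flint wrapper
@btime my_fflu($A);   # the FFLU wrapper defined above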

What do we do now? The easiest option would be to use RREF again in the solve context and leave the decision which algorithm to use to flint. (There is a heuristic used in https://github.com/flintlib/flint/blob/main/src/fmpz_mat/rref.c)
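
Purely as an illustration of the alternative (not necessarily what #1713 does): if the choice were kept on the Julia side instead, the benchmarks above would suggest a simple shape-based dispatch, for example with a hypothetical helper like

# Hypothetical helper (name made up for this sketch): prefer FFLU only when
# the matrix is wider than it is tall; otherwise use RREF.
use_fflu(A::QQMatrix) = ncols(A) > nrows(A)

but delegating to flint's own heuristic avoids duplicating that logic here.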

@thofma (Member) commented Apr 10, 2024:

> What do we do now? The easiest option would be to use RREF again in the solve context and leave the decision which algorithm to use to flint. (There is a heuristic used in https://github.com/flintlib/flint/blob/main/src/fmpz_mat/rref.c)

Sounds like a good solution.
