Unitful linear algebra #46

Open
ChrisRackauckas opened this issue Nov 10, 2016 · 18 comments

@ChrisRackauckas
Contributor

ChrisRackauckas commented Nov 10, 2016

Some linear algebra operations like \ have a hard time with unitful numbers. Take the following case:

a = [1.0u"N" 2.0u"N"
      3.0u"N" 1.0u"N"]
b = [1.0u"N";3.0u"N"]

If you try \ it gives a dimension error. If you try * then it uses the generic * fallback and not BLAS. The same is true if you use:

A = rand(100,100)*u"N"
b = rand(100)*u"N"

I suggest trying something like this: perform a dimensional analysis, strip the units, use the generic * or \, and then re-apply the units.

To do this properly, you might need a UnitfulArray instead of an array of Unitful values. I am not sure if stripping the units on an array of Unitful numbers can be done without making a temporary.
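
For concreteness, here is a minimal sketch of that idea for the homogeneous example above (the unit bookkeeping is just the hand-done dimensional analysis; ustrip and unit are existing Unitful functions):

using Unitful, LinearAlgebra

a = [1.0u"N" 2.0u"N"
     3.0u"N" 1.0u"N"]
b = [1.0u"N"; 3.0u"N"]

# Strip the units, solve with the plain Float64 method (which can hit BLAS/LAPACK),
# then re-apply the unit implied by dimensional analysis: unit(b) / unit(a).
xunit = unit(eltype(b)) / unit(eltype(a))   # N / N here, i.e. the units cancel
x = (ustrip.(a) \ ustrip.(b)) * xunit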

@giordano
Collaborator

giordano commented Dec 7, 2016

Indeed the algorithm used in generic_lufact! doesn't look very physical-units-friendly:

Akkinv = inv(A[k,k])
for i = k+1:m
    A[i,k] *= Akkinv
end

Other types T for which T and the type of T*inv(T) differ would also fail at this point. As a physicist, I expect (or hope) that a function performs dimensionally consistent computations. It may be worth looking into fixing the upstream factorization method.
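
To make the failure concrete, here is a minimal illustration (the exact error comes from Unitful's conversion machinery; I'd expect a DimensionError, but I haven't chased it down):

using Unitful

A = [1.0u"N" 2.0u"N"; 3.0u"N" 1.0u"N"]
Akkinv = inv(A[1, 1])   # has units N^-1
A[2, 1] * Akkinv        # dimensionless, fine as a standalone value...
A[2, 1] *= Akkinv       # ...but storing it back into a matrix whose eltype carries
                        # units of N requires a conversion, which throws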

@briochemc
Contributor

Is there a plan to eventually resolve this issue or was this given up on?

@cstjean
Contributor

cstjean commented Jun 3, 2019

Depending on your use case, #191 might work? I don't know if I'll ever find the time to finish it, though.

@briochemc
Contributor

Thanks for the suggestion! Unfortunately it does not really help me, because my use case involves large sparse LU factorizations (factorize, ldiv!, etc.). I also outlined my own dirty workaround in #150, but I'm not sure it is a good one at all (no one commented on it). I realize discussion on this has been going on for a while (e.g., JuliaLang/julia#20484 and JuliaLang/julia#7623), but I am unaware of any progress on the issue here, so I wanted to revive it a bit! 🙏

@timholy
Contributor

timholy commented Jun 3, 2019

The idea of stripping the units seems a bit limiting. For example, this matrix has an inverse:

julia> using Unitful: mm, s, kg

julia> A = [2mm^2 1mm*s; 1mm*s 2s^2]
2×2 Array{Quantity{Int64,D,U} where U where D,2}:
 2 mm^2  1 mm s
 1 mm s   2 s^2

but this one doesn't:

julia> A = [2mm^2 1kg; 1kg 2s^2]
2×2 Array{Quantity{Int64,D,U} where U where D,2}:
 2 mm^2   1 kg
   1 kg  2 s^2

I think you'd want to find that out by having the code throw an error when you try to compute the factorization. I guess you could run a check at the beginning to see, and if it looks OK then you could strip the units, compute the factorization, and assign the units retrospectively.

But alternatively you could just write a unit-respecting implementation of LU factorization. Given that LU factorization is not a complicated algorithm, someone who wants this enough (@briochemc?) should just sit down and figure it out. The biggest problem, of course, is that it won't be inferrable if each entry has different units. So indeed you might want a special case for homogeneous arrays that leverages BLAS. Compared to computing the factorization, taking a copy or two is probably nothing (O(N^2) vs O(N^3)).
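
A rough sketch of what that up-front check could look like: a square unitful matrix has a dimensionally consistent inverse only if its dimensions factor as an outer product, dimension(A[i,j]) == r[i] * c[j]. The helper below is illustrative only, not existing API:

using Unitful

# Pick candidate row/column dimensions from the first column and first row,
# then verify that every entry is consistent with them.
function dims_factor(A::AbstractMatrix{<:Quantity})
    r = dimension.(A[:, 1])
    c = dimension.(A[1, :]) ./ Ref(r[1])
    return all(dimension(A[i, j]) == r[i] * c[j] for i in axes(A, 1), j in axes(A, 2))
end

# With the matrices above (using Unitful: mm, s, kg):
# dims_factor([2mm^2 1mm*s; 1mm*s 2s^2])   # true  -> safe to strip, factorize, reattach units
# dims_factor([2mm^2 1kg; 1kg 2s^2])       # false -> no dimensionally consistent inverse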

@briochemc
Contributor

Thanks for your comment. I think I understand and agree 😄 Although (and I may be wrong) I think the stripping is mandatory for many applications. In particular, any case requiring sparse arrays will also require BLAS (and efficient sparse factorizations à la SuiteSparse) and have homogeneous units.

For example, my most typical case is a discretized differential operator represented by a sparse matrix A with unit u"1/s" (with A typically on the order of 1,000,000 × 1,000,000 and roughly 10 non-zeros per row/column). For this case, SuiteSparse and BLAS are mandatory, right? Also, I only factorize A in order to subsequently solve linear systems (or use it in iterative solvers), so I am not that concerned about how the unit is stored in the factors (in Af = factorize(A)) and would be happy to have it carried along with the Factorization object, i.e., without specifying the unit of each factor. (Maybe this is simpler to implement?) For me, it only matters that ldiv!(A, b) or ldiv!(Af, b) returns the right unit, i.e., that of b divided by that of A (again assuming all entries of A and all entries of b have the same unit).
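
To illustrate what I mean, here is a rough sketch (the wrapper type and function names are made up for illustration, not a proposal for the actual API; it assumes a homogeneous, concrete element type):

using Unitful, SparseArrays, LinearAlgebra

# Factorize the unit-stripped matrix and carry the matrix's unit alongside it.
struct UnitfulFactorization{F,U<:Unitful.Units}
    fac::F
    unit::U
end

unitful_factorize(A::SparseMatrixCSC{<:Quantity}) =
    UnitfulFactorization(factorize(ustrip.(A)), unit(eltype(A)))

# Solving just divides the units: the result has unit(b) / unit(A).
Base.:\(F::UnitfulFactorization, b::AbstractVector{<:Quantity}) =
    (F.fac \ ustrip.(b)) * (unit(eltype(b)) / F.unit)

# e.g. for A with unit u"1/s", unitful_factorize(A) \ b has the unit of b times s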

@timholy
Contributor

timholy commented Jun 3, 2019

Yes, if you have homogeneous arrays then you will definitely get better performance using SuiteSparse and BLAS. In theory we're getting close to being able to write both libraries in Julia, with the same performance that the Fortran/C versions have, but I fear that's not a small project and might require more work on the compiler.

@briochemc
Contributor

Fishing for wisdom here: in your opinion, is it worth having a separate UnitfulSparse.jl package with Unitful sparse arrays, in the vein of what Chris suggested (i.e., one that would strip and reattach units to allow sparse linear algebra operations)?

@timholy
Contributor

timholy commented Jun 4, 2019

It might. Perhaps it would depend on how big of a deal it is. If it turns out to be quite short, perhaps adding it here would be better, but if it ends up being a big thing then I'd say a separate package makes a lot of sense.

@cstjean
Contributor

cstjean commented Jun 4, 2019

Unfortunately it does not really help me because my use case involves large sparse LU factorizations (factorize, ldiv!, etc.).

I believe it ought to be possible to design a UnitfulArray{A,U} type such that it supports sparse/dense matrices, and homogeneous/inhomogeneous units. My PR isn't far off:

UnitfulArray{T, N, A<:AbstractArray{T, N}, U<:NTuple{N, TupleOf{Units}}} <: AbstractArray{Quantity{T}, N}

If the units U were allowed to be Homogeneous(u"m^2") (or maybe a FillArrays.Fill), then we could support your use case and mine with the same code.
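
Something like this minimal read-only sketch of the homogeneous special case, to make the idea concrete (to be clear, this is not the type from #191, just an illustration):

using Unitful

struct HomogeneousUnitArray{T,N,A<:AbstractArray{T,N},U<:Unitful.Units} <: AbstractArray{Quantity{T},N}
    data::A    # plain numeric storage, so BLAS/SuiteSparse can act on it directly
    unit::U    # one unit shared by every element
end

Base.size(x::HomogeneousUnitArray) = size(x.data)
Base.getindex(x::HomogeneousUnitArray, I::Int...) = x.data[I...] * x.unit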

@briochemc
Contributor

I have another suggestion, which is probably dumb, but here I go. When a matrix M and a vector x with units have a concrete element type (which I think means that they have a homogeneous unit), why not use that as a check to strip and reattach units when they are used with \? This first step does not require a new UnitfulArray type, right? E.g.:

julia> using SparseArrays, LinearAlgebra, SuiteSparse

julia> using Unitful

julia> T = sprand(10, 10, 0.5) +  I ;

julia> T = T * u"1/s" ;

julia> x = rand(10) * u"mmol/m^3" ;

julia> elunit(M::SparseMatrixCSC{U, Int}) where U = unit(U)
elunit (generic function with 1 method)

julia> elunit(x::AbstractVecOrMat{U}) where U = unit(U)
elunit (generic function with 2 methods)

julia> function LinearAlgebra.:\(M::SparseMatrixCSC{U, Int}, x::AbstractVecOrMat{V}) where {U<:Quantity, V<:Quantity}
           if ~isconcretetype(U) || ~isconcretetype(V)
               error("No mixed units for backslash")
           else
               return (ustrip.(M) \ ustrip(x)) * (elunit(x) / elunit(M))
           end
       end

julia> T \ x
10-element Array{Quantity{Float64,𝐋^-3*𝐍*𝐓,Unitful.FreeUnits{(m^-3, mmol, s),𝐋^-3*𝐍*𝐓,nothing}},1}:
   0.5218075428846516 m^-3 mmol s
 -0.37126881733087075 m^-3 mmol s
  -0.5717943786412751 m^-3 mmol s
   0.6654072332256958 m^-3 mmol s
   -0.489779363513709 m^-3 mmol s
  -0.6341968692366408 m^-3 mmol s
   0.8891121153540674 m^-3 mmol s
   1.4828216212166003 m^-3 mmol s
  0.24399317132721166 m^-3 mmol s
  -0.1969247224331648 m^-3 mmol s

@briochemc
Contributor

Actually, maybe it would be simpler to just let elunit do the work, and only have

julia> using SparseArrays, LinearAlgebra, SuiteSparse

julia> using Unitful

julia> T = (sprand(10, 10, 0.5) + I) * u"1/s" ;

julia> x = rand(10) * u"mmol/m^3" ;

julia> elunit(M::SparseMatrixCSC{U, Int}) where U = unit(U)
elunit (generic function with 1 method)

julia> elunit(x::AbstractVecOrMat{U}) where U = unit(U)
elunit (generic function with 2 methods)

julia> LinearAlgebra.:\(M::SparseMatrixCSC{<:Quantity, Int}, x::AbstractVecOrMat{<:Quantity}) = (ustrip.(M) \ ustrip(x)) * (elunit(x) / elunit(M))

julia> T \ x
10-element Array{Quantity{Float64,𝐋^-3*𝐍*𝐓,Unitful.FreeUnits{(m^-3, mmol, s),𝐋^-3*𝐍*𝐓,nothing}},1}:
  -1.592883735567312 m^-3 mmol s
  0.7810705179032268 m^-3 mmol s
   0.506011915170795 m^-3 mmol s
  0.9385158244984871 m^-3 mmol s
 -0.7054928263619203 m^-3 mmol s
 0.39630007844679466 m^-3 mmol s
 -1.3477024041987156 m^-3 mmol s
  1.0284718040639782 m^-3 mmol s
  2.4997855862739455 m^-3 mmol s
   -1.36216381582195 m^-3 mmol s

Please let me know if this sort of option is dumb or inefficient!

@briochemc
Contributor

BTW with all the movement I am unsure: should I keep discussing here or open another issue on the https://github.com/rigetti/Unitful.jl fork?

@giordano
Collaborator

For the time being I'd avoid splitting the discussion and continuing it in rigetti/Unitful.jl; I believe the situation is still unclear.

@goretkin

I want to bring attention to my rough and partial attempt at Unitful linear algebra: https://github.com/goretkin/UnitfulLinearAlgebra.jl

@mcabbott
Contributor

mcabbott commented Apr 12, 2021

Maybe worth mentioning here that, at least for the simplest case in which you have dense arrays of homogeneous units, you can just reinterpret them:

function LinearAlgebra.:\(A::Matrix{Q}, b::Vector{Q}) where {Q<:Quantity{Float64}}
    A0, b0 = reinterpret(Float64, A), reinterpret(Float64, b)
    A0 \ b0
end

@test A \ b isa StridedVector{Float64}

This doesn't need to make a copy, nor define a UnitfulArray. It's probably what you would want to do in simple BLAS-friendly cases, even if you had a better generic routine.

(And clearly it could be done a bit more carefully, maybe for any StridedArray{<:HWNumber}, and if the units don't cancel then reinterpreting the result... )
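
For instance, something along these lines for the homogeneous case where the units don't cancel (again just a sketch with made-up names, sticking to Matrix{Float64}-backed arrays):

using Unitful, LinearAlgebra

function ldiv_reinterpret(A::Matrix{QA}, b::Vector{QB}) where {QA<:Quantity{Float64}, QB<:Quantity{Float64}}
    x0 = reinterpret(Float64, A) \ reinterpret(Float64, b)   # plain Float64 solve
    Qx = typeof(1.0 * (unit(QB) / unit(QA)))                 # concrete eltype of the solution
    return reinterpret(Qx, x0)                               # reattach units without copying x0
end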

@singularitti
Contributor

Any idea when it will be solved?

@RomeoV

RomeoV commented Dec 6, 2022

Here's a slight improvement on @mcabbott's answer, for the case where the matrix and the vector both have homogeneous, but not equal, units.

function LinearAlgebra.:\(A::Matrix{Q}, b::Vector{V}) where {Q<:Quantity{Float64}, V<:Quantity{Float64}}
  ustrip(A)\ustrip(b) * (unit(b[1]) / unit(A[1,1]))
end
