Cofactor Matrix #218 (base: master)

Conversation
The nightly failure is unrelated to the PR. The behavior of the built-in eigenvalue reconstruction seems to have changed.
```diff
@@ -7,7 +7,7 @@ using Statistics: mean
 using LinearAlgebra
 using StaticArrays
 # re-exports from LinearAlgebra
-export ⋅, ×, dot, diagm, tr, det, norm, eigvals, eigvecs, eigen
+export ⋅, ×, dot, diagm, tr, det, cof, norm, eigvals, eigvecs, eigen
```
I don't think this comes from LinearAlgebra, so maybe list it separately. Is `cof` a standard name for this operation, btw?
Oh, right. In the continuum mechanics books I read, this operation is usually abbreviated `cof` (Holzapfel, Wriggers). German Wikipedia also uses it: https://de.wikipedia.org/wiki/Minor_(Lineare_Algebra)#Eigenschaften. But it's possible that this abbreviation is a German thing. If you prefer something different, we can use that instead.
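For context, the property referenced there is that for an invertible matrix the cofactor matrix satisfies cof(A) = det(A) A^{-T}. A minimal reference sketch of that identity (the name `cof_reference` is hypothetical and not part of this PR; the general definition via signed minors also covers singular matrices):

```julia
using LinearAlgebra

# Reference definition via the identity cof(A) = det(A) * A^{-T}.
# Only valid for invertible A; the general definition uses signed minors.
cof_reference(A::AbstractMatrix) = det(A) * transpose(inv(A))
```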
I don't mind `cof`, just wondering if it was called that, or something else, in other software. I can't find anything from a quick search though.
My only worry here is that some standard library might include cofactors in the future. :D
We could do that, but I am not sure about the performance implications. Will add a benchmark.
If I understood you correctly, then you propose something along these lines:

```julia
@generated function inv_new(t::Tensor{2, dim}) where {dim}
    Tt = get_base(t)
    idx(i,j) = compute_index(Tt, i, j)
    ex = quote
        transpose(cof(t))*dinv
    end
    return quote
        $(Expr(:meta, :inline))
        dinv = 1 / det(t)
        @inbounds return $ex
    end
end
```

Seems like this is slightly faster on my machine in 3D:

```julia
julia> F3 = Tensor{2,3,Float64}((1.1,-0.1,0.1, -0.1,0.9,0.0, 0.25,0.0,1.2))
3×3 Tensor{2, 3, Float64, 9}:
  1.1  -0.1  0.25
 -0.1   0.9  0.0
  0.1   0.0  1.2

julia> F2 = Tensor{2,2,Float64}((1.1,-0.1, -0.1,0.9))
2×2 Tensor{2, 2, Float64, 4}:
  1.1  -0.1
 -0.1   0.9

julia> @btime Tensors.inv_new($F2)
  6.240 ns (0 allocations: 0 bytes)
2×2 Tensor{2, 2, Float64, 4}:
 0.918367  0.102041
 0.102041  1.12245

julia> @btime inv_new($F3)
  11.331 ns (0 allocations: 0 bytes)
3×3 Tensor{2, 3, Float64, 9}:
  0.936281    0.104031    -0.195059
  0.104031    1.12267     -0.0216732
 -0.0780234  -0.00866927   0.849588

julia> @btime Tensors.inv($F2)
  6.240 ns (0 allocations: 0 bytes)
2×2 Tensor{2, 2, Float64, 4}:
 0.918367  0.102041
 0.102041  1.12245

julia> @btime Tensors.inv($F3)
  11.612 ns (0 allocations: 0 bytes)
3×3 Tensor{2, 3, Float64, 9}:
  0.936281    0.104031    -0.195059
  0.104031    1.12267     -0.0216732
 -0.0780234  -0.00866927   0.849588
```

Reproducer:

```julia
using Tensors, BenchmarkTools
F2 = Tensor{2,2,Float64}((1.1,-0.1, -0.1,0.9))
F3 = Tensor{2,3,Float64}((1.1,-0.1,0.1, -0.1,0.9,0.0, 0.25,0.0,1.2))
@btime Tensors.inv_new($F2)
@btime Tensors.inv_new($F3)
@btime Tensors.inv($F2)
@btime Tensors.inv($F3)
```

Should I include the change?
I'm getting

```julia
julia> @btime inv_new($F2);
  2.100 ns (0 allocations: 0 bytes)

julia> @btime inv_new($F3);
  6.800 ns (0 allocations: 0 bytes)

julia> @btime Tensors.inv($F2);
  1.900 ns (0 allocations: 0 bytes)

julia> @btime Tensors.inv($F3);
  4.300 ns (0 allocations: 0 bytes)

julia> @btime myinv($F2);
  1.900 ns (0 allocations: 0 bytes)

julia> @btime myinv($F3);
  7.307 ns (0 allocations: 0 bytes)
```

with

```julia
@generated function inv_new(t::Tensor{2, dim}) where {dim}
    Tt = Tensors.get_base(t)
    idx(i,j) = Tensors.compute_index(Tt, i, j)
    ex = quote
        transpose(cof(t))*dinv
    end
    return quote
        $(Expr(:meta, :inline))
        dinv = 1 / det(t)
        @inbounds return $ex
    end
end

myinv(F) = cof(F)'/det(F)
```
I guess the optimizer has some trouble with the transpose statement here. It is also questionable how reliable the measurements are in this regime.
Could do

```julia
cof(t::SecondOrderTensor) = _cof(t, Val(false))
inv(t::SecondOrderTensor) = _cof(t, Val(true)) / det(t)

@generated function _cof(t::Tensor{2, dim, T}, ::Val{Transpose}) where {dim, T, Transpose}
    Tt = get_base(t)
    idx(i,j) = Transpose ? compute_index(Tt, j, i) : compute_index(Tt, i, j)
    ...
```

but not sure if that is worth the added complexity compared to just duplicating the code...
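As a side note, the `Val` argument above resolves the transpose decision at compile time, since the `Bool` is encoded in the type and each specialization is generated branch-free. A self-contained sketch of the pattern, independent of Tensors.jl internals (the name `swap` is hypothetical):

```julia
# The Bool lives in the type domain, so each Val instance compiles
# to its own branch-free method specialization.
swap(i, j, ::Val{true})  = (j, i)   # transposed index order
swap(i, j, ::Val{false}) = (i, j)   # plain index order

swap(1, 2, Val(true))   # -> (2, 1)
swap(1, 2, Val(false))  # -> (1, 2)
```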
This is commonly used in continuum mechanics to compute the push-forward of surface area elements from the reference to the current configuration (Nanson's formula).
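Nanson's formula reads n da = cof(F) ⋅ N dA, with cof(F) = det(F) F^{-T}. A small sketch with Tensors.jl (the deformation gradient value is arbitrary, and `cof` is spelled out via the determinant identity so it runs without this PR):

```julia
using Tensors

F  = Tensor{2,3,Float64}((1.1,-0.1,0.1, -0.1,0.9,0.0, 0.25,0.0,1.2))
N  = Vec{3}((0.0, 0.0, 1.0))  # unit normal of the reference area element
dA = 1.0                      # reference area

# Nanson's formula: n da = cof(F) ⋅ N dA, with cof(F) = det(F) * F^{-T}
nda = (det(F) * transpose(inv(F))) ⋅ N * dA
da  = norm(nda)   # current area
n   = nda / da    # current unit normal
```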