Improve 2x2 eigen #694

Open
tfgast wants to merge 5 commits into JuliaArrays:master from tfgast:master

Conversation

@tfgast commented Nov 27, 2019

Using the fact that the eigenvectors must be orthonormal, v12 and v22 can be computed from v11 and v21. In addition to being more performant, this is actually more robust in some cases; for instance, with [1.0 -9.465257061381386e-20; -9.465257061381386e-20 1.0], the previous code would return duplicated eigenvectors.
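
For context, the trick described above looks roughly like the following. This is a minimal sketch under assumed names (eig2x2_sym is not the PR's actual function), for a real symmetric 2x2 matrix [a b; b d] with Float64 entries:

# Sketch only, not the PR's code: compute the first eigenvector directly and
# derive the second from orthonormality, using hypot for robustness.
function eig2x2_sym(a::Float64, b::Float64, d::Float64)
    if iszero(b)                      # diagonal: standard basis, avoids division by zero below
        return (a, d), ((1.0, 0.0), (0.0, 1.0))
    end
    half_tr = (a + d) / 2
    r = hypot((a - d) / 2, b)         # hypot keeps the discriminant accurate under over/underflow
    λ1, λ2 = half_tr - r, half_tr + r
    # first eigenvector from (A - λ1*I) * v = 0, then normalized
    v11, v21 = λ1 - d, b
    n = hypot(v11, v21)
    v11, v21 = v11 / n, v21 / n
    # orthonormality gives the second eigenvector: (v12, v22) = (-v21, v11)
    return (λ1, λ2), ((v11, v21), (-v21, v11))
end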

@c42f added the linear-algebra and numerical-robustness labels Nov 27, 2019
@coveralls commented Nov 27, 2019

Coverage Status

Coverage decreased (-0.04%) to 81.857% when pulling 54afe61 on tfgast:master into 103e9d4 on JuliaArrays:master.

@c42f (Member) commented Nov 27, 2019

Looks great to me, thanks!

Could you add a test or tests covering the cases the previous code got wrong? Are these characterized by the off-diagonal elements being smaller than eps(eltype(A))?

Would this fix also allow us to remove the special case where A is detected to be diagonal? Just looking at the original code, the diagonal test seems unlikely to be numerically reliable... and is ultimately the root cause of this bug.
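
A regression test along these lines might look roughly like the following (a sketch only; the matrix is the one from the PR description, and the tests actually added in the PR may differ):

using StaticArrays, LinearAlgebra, Test

# the near-diagonal matrix from the PR description (column-major constructor)
a = SMatrix{2,2}(1.0, -9.465257061381386e-20, -9.465257061381386e-20, 1.0)
E = eigen(Symmetric(a))
V = eigvecs(E)
@test V' * V ≈ SMatrix{2,2}(1.0, 0.0, 0.0, 1.0)   # columns are orthonormal, not duplicated
@test V * SDiagonal(eigvals(E)) * V' ≈ a           # reconstructs the original matrix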

@c42f added the bugfix label Nov 27, 2019
@tfgast (Author) commented Dec 12, 2019

I've improved the handling in the case of underflow as well. I wasn't able to remove the special case for diagonal matrices, as we still need to avoid dividing by zero. I've also added some more tests.

@c42f (Member) left a comment

OK, no worries, I'm happy to merge this more or less as-is. Though I'd like it if you could clarify which cases you're targeting with the underflow tests (so we know what they're meant to test in the future).

test/eigen.jl Outdated
m2 = SMatrix{2,2}(m2_a)
@test (@inferred_maybe_allow SVector{2,ComplexF64} eigvals(m1, m2)) ≈ eigvals(m1_a, m2_a)
@test (@inferred_maybe_allow SVector{2,ComplexF64} eigvals(Symmetric(m1), Symmetric(m2))) ≈ eigvals(Symmetric(m1_a), Symmetric(m2_a))
# issue #523
Member:
Suggested change
# issue #523
# issues #523, #694

test/eigen.jl Outdated
@test eigvecs(E) * SDiagonal(eigvals(E)) * eigvecs(E)' ≈ A
A = SMatrix{2,2,Float64}((i, pfmin, pfmin, j))
E = eigen(Symmetric(A, uplo))
@test eigvecs(E) * SDiagonal(eigvals(E)) * eigvecs(E)' ≈ A
Member:
Cool, so can you add some comment about exactly which cases we are straddling here by examining the boundary between pfmin vs nfmin? We seem to have x^2 == 0 for both of those, and i ± x == 1.0, i*j ± x == i*j, etc.
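
To spell out what those identities mean numerically, assuming pfmin and nfmin are tiny values such as ±nextfloat(0.0) (their actual definitions are in test/eigen.jl, not shown in this hunk):

pfmin = nextfloat(0.0)    # 5.0e-324, smallest positive subnormal
nfmin = -pfmin
pfmin^2 == 0.0            # true: the square underflows to zero
nfmin^2 == 0.0            # true
1.0 + pfmin == 1.0        # true: the off-diagonal perturbation is absorbed entirely
1.0 - pfmin == 1.0        # true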

Member:
I'd love to merge this, but I'm unsure whether these tests are testing the right thing. Do you have some thoughts on the above query?

Author (tfgast):
I looked into it again. You were right, they were testing the wrong thing. When I tested the right thing, it wasn't working properly. I put some time in to make it work and to simplify things, mostly by using the hypot function, which already has a lot of work put into making it accurate.
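
For future readers, the kind of failure hypot avoids is the classic over/underflow of the naive formula (illustrative only, not code from this PR):

x = 1e200
sqrt(x^2 + x^2)    # Inf, because x^2 overflows
hypot(x, x)        # ≈ 1.4142e200

y = 1e-200
sqrt(y^2 + y^2)    # 0.0, because y^2 underflows
hypot(y, y)        # ≈ 1.4142e-200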

@c42f added this to the v0.12.2 milestone Apr 14, 2020
@c42f (Member) left a comment

This seems so close; I'd love to get it into the next release, but there's one problem still to resolve: the use of zip in the tests seems confusing and might be wrong.

Also, how does this change as a whole impact performance?

Anyway it's great that you've managed to delete half the code from the original implementation. That's what I like to see!

test/eigen.jl Outdated
smallest_normal = floatmin(zero)
largest_subnormal = prevfloat(smallest_normal)
epsilon = eps(1.0)
one_p_epsilon = 1.0 + epsilon
Member:
Quibble: better written as nextfloat(1.0). It's the same in this case, but if you'd written 1.0 - epsilon, that's not equal to prevfloat(1.0), which is a bit of a gotcha given that the boundary in floating-point discretization density lies on the powers of two.
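
Concretely, the gotcha being described:

nextfloat(1.0) == 1.0 + eps(1.0)    # true
prevfloat(1.0) == 1.0 - eps(1.0)    # false
1.0 - prevfloat(1.0)                # eps(1.0)/2: the spacing halves just below the power of two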

Author (tfgast):
👍

test/eigen.jl Outdated
epsilon = eps(1.0)
one_p_epsilon = 1.0 + epsilon
degenerate = (zero, -1, 1, smallest_non_zero, smallest_normal, largest_subnormal, epsilon, one_p_epsilon, -one_p_epsilon)
@testset "2×2 degenerate cases" for (i, j, k) in zip(degenerate,degenerate,degenerate), uplo in (:U, :L)
Member:
I don't think this zip does what you want, because i, j, and k will always be the same. Though I'm not sure what you want!

Perhaps you meant to use for (i, j, k) in Iterators.product(degenerate,degenerate,degenerate)?
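
For the record, the difference between the two patterns (with a shortened tuple for illustration):

degenerate = (0.0, 1.0, -1.0)
collect(zip(degenerate, degenerate, degenerate))
# 3 tuples, each with i == j == k: (0.0, 0.0, 0.0), (1.0, 1.0, 1.0), (-1.0, -1.0, -1.0)
collect(Iterators.product(degenerate, degenerate, degenerate))
# 3×3×3 array of tuples covering all 27 combinations of (i, j, k)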

Author (tfgast):
You're totally right, that's what I meant.

@andyferris (Member) commented:
Where is this one at?

> Also, how does this change as a whole impact performance?

Do we have an answer to this one?

@thchr (Collaborator) commented Jan 6, 2022

I tested the performance impact of this just now. As far as I can see, there is a speed-up for Hermitian{ComplexF64, ...} matrices but a slow-down for Hermitian{Float64, ...} matrices (it's not completely clear to me why):

Hermitian{ComplexF64, ...} matrices

Master:

julia> @benchmark StaticArrays._eig(Size{(2,2)}(), A, true, true) setup=begin
           a = @SMatrix rand(ComplexF64,2,2)
           A = Hermitian(a + a')
       end
BenchmarkTools.Trial: 10000 samples with 940 evaluations.
 Range (min … max):  101.915 ns … 324.681 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     102.766 ns               ┊ GC (median):    0.00%
 Time  (mean ± σ):   105.722 ns ±   9.921 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

  ▅█▁          ▁▃▁  ▁▁                                          ▁
  ███▆▅▄▆▆▆▅▅▅▆███▇█████▇▇▇▆▅▅▅▅▅▅▄▅▅▅▄▅▄▄▅▄▅▄▄▄▅▅▄▄▄▅▅▅▄▄▄▄▃▄▃ █
  102 ns        Histogram: log(frequency) by time        149 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

PR: (speed-up)

julia> @benchmark StaticArrays._eig(Size{(2,2)}(), A, true, true) setup=begin
           a = @SMatrix rand(ComplexF64,2,2)
           A = Hermitian(a + a')
       end
BenchmarkTools.Trial: 10000 samples with 979 evaluations.
 Range (min … max):  64.147 ns … 324.821 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     66.394 ns               ┊ GC (median):    0.00%
 Time  (mean ± σ):   67.999 ns ±   7.887 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

  ▄▆▆█▂            ▁▂▁▁  ▁                                     ▁
  █████▆▄▅▃▆▅▆▆▆▄▆██████████▇▇▇▇▆▆▅▅▅▅▅▅▄▅▅▅▄▆▄▅▅▄▅▁▄▅▄▅▃▅▅▄▄▄ █
  64.1 ns       Histogram: log(frequency) by time       104 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

Hermitian{Float64, ...} matrices

Master:

julia> @benchmark StaticArrays._eig(Size{(2,2)}(), A, true, true) setup=begin
           a = @SMatrix rand(Float64,2,2)
           A = Hermitian(a + a')
       end
BenchmarkTools.Trial: 10000 samples with 999 evaluations.
 Range (min … max):  8.909 ns … 69.670 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     9.009 ns              ┊ GC (median):    0.00%
 Time  (mean ± σ):   9.230 ns ±  1.770 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

  █▄                                                         ▁
  ██▆▄▄▄▄▁▃▄▅▄▃▃▃▄▁▁▁▄▅▇▇▆▅▄▅▄▄▃▄▄▅▄▄▄▁▃▃▁▃▃▁▁▁▃▁▁▁▁▁▃▁▄▃▃▄▅ █
  8.91 ns      Histogram: log(frequency) by time     18.9 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

PR: (slow-down)

julia> @benchmark StaticArrays._eig(Size{(2,2)}(), A, true, true) setup=begin
           a = @SMatrix rand(Float64,2,2)
           A = Hermitian(a + a')
       end
BenchmarkTools.Trial: 10000 samples with 994 evaluations.
 Range (min … max):  29.879 ns … 204.930 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     31.489 ns               ┊ GC (median):    0.00%
 Time  (mean ± σ):   32.438 ns ±   5.469 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

  ▁▆▃ █▃                                                       ▁
  ███████▆▆▅▄▄▄▅▁▃▄▄▅▆▅▆▅▅▄▁▄▅▇█▇█▇▇▆▇▇▆▆▇▅▇▅▇▇▅▅▅▆▆▆█▇▇▆▅▆▆▅▆ █
  29.9 ns       Histogram: log(frequency) by time      51.6 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

Labels
bugfix, linear-algebra, numerical-robustness

Projects
None yet

Development
Successfully merging this pull request may close these issues: none yet.

5 participants