eigvals performs faster for Matrix{ComplexF64} than Matrix{Float64} on Windows #960

Description

@yakovbraver

(Cross-posting from Discourse)
I’ve noticed that when diagonalising real symmetric matrices using the default OpenBLAS, eigvals may perform faster if the input matrix is complex, i.e. Matrix{ComplexF64} rather than Matrix{Float64}. Here is my test code:

using LinearAlgebra, BenchmarkTools

n = 50             # matrix dimension
F = rand(n, n)     # a random real Float64 matrix
F += F'            # make `F` symmetric
C = ComplexF64.(F) # a copy of `F` stored as a `Matrix{ComplexF64}`

@benchmark eigvals($F)
@benchmark eigvals($C)

For n = 50, the complex matrix is diagonalised ~5 times faster than the real one:
[Screenshot of the @benchmark outputs for F and C, 2022-10-17]
For n = 230, both calculations take the same amount of time, and for larger matrices the complex calculation becomes slower than the real one, as expected.
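To illustrate how such a crossover point could be located, here is a minimal sketch that times both variants over a range of matrix sizes; the helper name real_vs_complex_ratio and the particular sizes are illustrative, not part of the original report:

using LinearAlgebra, BenchmarkTools

# Illustrative helper (not from the original report): ratio of the minimum
# eigvals time for the real matrix to that for its complex copy.
function real_vs_complex_ratio(n)
    F = rand(n, n); F += F'    # random real symmetric matrix
    C = ComplexF64.(F)         # the same matrix stored as ComplexF64
    t_real    = @belapsed eigvals($F)
    t_complex = @belapsed eigvals($C)
    return t_real / t_complex  # > 1 means the complex path is faster
end

for n in (50, 100, 150, 200, 230, 300)
    println("n = $n: real/complex time ratio ≈ ", round(real_vs_complex_ratio(n); digits=2))
end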
I could reproduce these results on four different machines running Windows 10, while on macOS 10.14.6 the issue is not present (the real calculation performs faster than the complex one, as expected). The outputs of versioninfo() are available in a gist.
The issue also does not seem to appear on Linux; see the Discourse thread.
When I switch to MKL.jl, the issue does not appear.
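For reference, here is a minimal sketch of how one might check which BLAS/LAPACK backend is active and switch to MKL; it assumes the MKL.jl package is already installed (e.g. via Pkg.add("MKL")):

using LinearAlgebra

BLAS.get_config()  # shows the loaded BLAS/LAPACK libraries (OpenBLAS by default)

using MKL          # assumes MKL.jl is installed; swaps the backend via libblastrampoline

BLAS.get_config()  # should now report MKL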

Metadata

    Labels

    external dependencies: Involves LLVM, OpenBLAS, or other linked libraries
    performance: Must go faster
    system:windows: Affects only Windows
    upstream: The issue is with an upstream dependency, e.g. LLVM
