Apply connected components decomposition for computing eigenvalues #19355
Conversation
✅ Hi, I am the SymPy bot (v158). I'm here to help you write a release notes entry. Please read the guide on how to write release notes. Your release notes are in good order. Here is what the release notes will look like:
This will be added to https://github.com/sympy/sympy/wiki/Release-Notes-for-1.7. Note: this comment will be updated with the latest check if you edit the pull request.

Update: The release notes on the wiki have been updated.
Codecov Report
```
@@            Coverage Diff             @@
##            master    #19355    +/-  ##
=========================================
  Coverage   75.618%   75.619%
=========================================
  Files          651       651
  Lines       169418    169511    +93
  Branches     39973     39998    +25
=========================================
+ Hits        128112    128183    +71
- Misses       35703     35717    +14
- Partials      5603      5611     +8
```
This looks good. I guess the same can be applied to many other functions like det, charpoly, inv, solve, rref, etc. Maybe there are a few key places to apply the optimisation so it doesn't need to be explicitly listed everywhere. For example, charpoly uses det, so doing it for det works for both. Then again, eigenvals uses charpoly, so maybe it's better to apply this in det rather than in eigenvals. The det could be returned in factored form...
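The det case can be sketched directly: when the index graph of a matrix splits into connected components, the determinant is the product of the determinants of the blocks. A minimal illustration (the matrix and symbols below are my own, not from the PR):

```python
from sympy import Matrix, symbols

a, b, c, d = symbols('a b c d')

# Indices {0, 2} and {1, 3} form two separate connected components:
# no entry couples one group to the other.
M = Matrix([
    [a, 0, b, 0],
    [0, c, 0, d],
    [b, 0, a, 0],
    [0, d, 0, c],
])

# The determinant of the whole matrix equals the product of the
# determinants of the component blocks.
block1 = Matrix([[a, b], [b, a]])  # restriction to indices {0, 2}
block2 = Matrix([[c, d], [d, c]])  # restriction to indices {1, 3}
assert (M.det() - block1.det() * block2.det()).expand() == 0
```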
I have to find some good benchmark examples for this |
How about the one in #16207? Also, this method applies to any permutation matrix if the permutation factors. Actually, there are better ways to get the eigenvalues of a permutation matrix, though.
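The snippet for those "better ways" isn't shown above, but one standard route (my own sketch, not the comment's code) reads the eigenvalues off the cycle structure: each k-cycle of the permutation contributes the k-th roots of unity, so the characteristic polynomial is a product of factors `lam**k - 1`.

```python
from sympy import Matrix, symbols
from sympy.combinatorics import Permutation

p = Permutation([1, 2, 0, 4, 3])  # one 3-cycle and one 2-cycle

# Permutation matrix: column j has its single 1 in row p(j).
n = p.size
P = Matrix(n, n, lambda i, j: 1 if p(j) == i else 0)

# The characteristic polynomial is the product of lam**k - 1 over the
# cycle lengths k, so the eigenvalues are roots of unity.
lam = symbols('lam')
cp = P.charpoly(lam).as_expr()
expected = 1
for cycle in p.cyclic_form:
    expected *= lam**len(cycle) - 1
assert (cp - expected).expand() == 0
```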
Actually, I see it is only about twice as fast for millisecond-scale examples.
The advantage will be much bigger for larger matrices that break down into much smaller matrices:

```python
from sympy import *
from time import time
from random import shuffle

x = Symbol('x')

for m in range(5):
    n = 5*2**m
    B = randMatrix(2)
    M = BlockMatrix([[B[i, j]*eye(n) for i in range(2)]
                     for j in range(2)]).as_explicit()
    p = list(range(2*n))
    shuffle(p)
    Ms = M[p, p]
    start = time()
    Ms.eigenvals()
    print('n = %d, T = %.3gsecs' % (n, time() - start))
```

With the PR this gives:

```
$ python g.py
n = 5, T = 0.098secs
n = 10, T = 0.0305secs
n = 20, T = 0.0706secs
n = 40, T = 0.144secs
n = 80, T = 0.492secs
```

On master I have:

```
$ python g.py
n = 5, T = 0.0771secs
n = 10, T = 0.189secs
n = 20, T = 2.04secs
n = 40, T = 29.5secs
n = 80, T = 368secs
```
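The structure behind these timings can be checked directly. A sketch of mine (it assumes the `connected_components` method available in recent SymPy versions, and uses a fixed nonzero 2x2 block instead of `randMatrix` so the result is deterministic): after the symmetric row/column shuffle, each component still couples an index `i` only with its partner `i + n`, so the 2n x 2n matrix splits into n blocks of size 2.

```python
from random import seed, shuffle
from sympy import BlockMatrix, Matrix, eye

seed(0)  # make the shuffle reproducible
n = 5
B = Matrix([[1, 2], [3, 4]])  # all entries nonzero, unlike randMatrix
M = BlockMatrix([[B[i, j]*eye(n) for i in range(2)]
                 for j in range(2)]).as_explicit()
p = list(range(2*n))
shuffle(p)
Ms = M[p, p]

# The shuffled 2n x 2n matrix splits into n connected components of
# size 2, so each eigenvalue problem is only 2x2.
comps = Ms.connected_components()
assert len(comps) == n
assert all(len(c) == 2 for c in comps)
```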
Okay, I see the performance difference grows for larger matrices with smaller block sizes.
As far as I have found, I don't think that charpoly actually uses
I think this is fine to merge if you're done.
References to other Issues or PRs
Brief description of what is fixed or changed
I think that matrix eigenvalue computations should use connected-component decomposition by default, because it has an advantage over working with the characteristic polynomial directly: polynomial factors can be separated out at the matrix level, as far as the matrix structure allows.
However, the converse may not hold: a companion matrix is connected as a whole, even if its polynomial can be factored.
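A quick check of that converse (my own example; it assumes the `connected_components` method available in recent SymPy versions):

```python
from sympy import Matrix

# Companion matrix of x**2 - 3*x + 2 = (x - 1)*(x - 2).
C = Matrix([[0, -2],
            [1,  3]])

# One connected component covering both indices: even though the
# polynomial factors, the matrix cannot be decomposed.
assert C.connected_components() == [[0, 1]]

# The eigenvalues nevertheless come from the factored polynomial.
assert C.eigenvals() == {1: 1, 2: 1}
```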
I've also removed the usage of `EigenOnlyMatrix`, because I think that such a design makes it difficult to organize matrix methods. I've also split the eigenvalue computation routines into `_eigenvals_dict` and `_eigenvals_list`.

Other comments
Release Notes
`Matrix([]).eigenvals(multiple=True)` will give an empty list instead of an empty dict.
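A short check of the changed behaviour (on SymPy versions that include this PR, i.e. 1.7 and later):

```python
from sympy import Matrix

# With multiple=True the result is now an empty list, not an empty dict.
assert Matrix([]).eigenvals(multiple=True) == []

# The default dict form is unchanged.
assert Matrix([]).eigenvals() == {}
```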