Addressing #84 and #85 #91
Conversation
Also changed the check for matrix singularity
Kudos for finding …
Thanks for the hint. Indeed, an additional SVD seems to be superfluous. However, it can be really helpful. Consider this code:

```python
import numpy as np
import matplotlib.pyplot as plt

matrix_size = 3
for spread in [5, 10, 20]:
    # Construct a matrix whose condition number grows with `spread`
    A1 = np.random.rand(matrix_size, matrix_size)
    A2 = np.diag(np.logspace(-spread, +spread, matrix_size))
    A3 = np.random.rand(matrix_size, matrix_size)
    A = A1 @ A2 @ A3
    # Should be an identity matrix
    invA_A = np.linalg.solve(A, A)
    message = "Matrix size: {:d}, matrix rank: {:d}"
    print(message.format(matrix_size, np.linalg.matrix_rank(A)))
    plt.figure()
    plt.imshow(np.log10(np.absolute(invA_A)), interpolation="none")
    plt.colorbar()
```

So instead of raising an exception, … Hence, numerical rank loss cannot be detected in …
Thanks for the detailed explanation. I assumed NumPy/SciPy would emulate Matlab with its warnings about large condition numbers, using …
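For context, the Matlab-style behaviour alluded to above could be emulated by hand with a condition-number check before solving. This is only a sketch, not part of the PR; the helper name `solve_with_cond_warning` and the default threshold `1 / eps` are assumptions loosely mirroring Matlab's warning heuristic:

```python
import warnings
import numpy as np

def solve_with_cond_warning(a, b, cond_threshold=None):
    """Solve a @ x = b, warning (Matlab-style) if `a` is badly conditioned."""
    if cond_threshold is None:
        # Assumed threshold: reciprocal of machine epsilon for the dtype
        cond_threshold = 1.0 / np.finfo(a.dtype).eps
    cond = np.linalg.cond(a)
    if cond > cond_threshold:
        warnings.warn(
            "Matrix is close to singular or badly scaled "
            "(condition number ~ {:.1e}); results may be inaccurate".format(cond)
        )
    return np.linalg.solve(a, b)

# A numerically singular matrix solves without error, but triggers the warning
bad = np.diag(np.logspace(-20, 20, 3))
x = solve_with_cond_warning(bad, np.eye(3))
```

Note that `np.linalg.solve` itself raises `LinAlgError` only for exactly singular matrices, which is why the warning has to be added on top.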
Oh, I didn't know about this issue, thanks! But it doesn't seem they're going to do anything about this behaviour. Anyway, most technical systems have very few (in terms of linear algebra 😉) inputs/outputs. I think this additional SVD-based rank check will make things slower only if there are at least 1000 inputs/outputs (we're inverting …
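To put a rough number on that cost claim, the extra SVD-based rank check can be timed against the solve itself. A minimal sketch (the matrix sizes and repetition counts are arbitrary choices, not taken from the PR):

```python
import timeit
import numpy as np

rng = np.random.default_rng(0)
for n in (100, 300):
    a = rng.random((n, n))
    b = rng.random((n, n))
    # Both operations are O(n^3), so the rank check roughly doubles the work
    t_solve = timeit.timeit(lambda: np.linalg.solve(a, b), number=5)
    t_rank = timeit.timeit(lambda: np.linalg.matrix_rank(a), number=5)
    print("n={:d}: solve {:.4f}s, rank check {:.4f}s".format(n, t_solve, t_rank))
```

Since both the LU-based solve and the SVD behind `matrix_rank` scale cubically, the check adds a constant factor rather than changing the asymptotic cost.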
#101 Changes are from branch `master` of https://github.com/mp4096/python-control.git. There was a merge conflict in how a for-loop was refactored: into `map` (here) versus a list comprehension (from PR #110). I compared the two alternatives using Jupyter's `%timeit` on matrices corresponding to LTI systems with 10 states, 2 inputs, and 2 outputs (so the A matrix has shape (10, 10), the B matrix has shape (10, 2), etc.), and with 100 states, 20 inputs, and 20 outputs, all filled from `numpy.random.random((r, c))`. The difference in timing does not appear significant, but `map` was slightly faster (approximately 500 to 900 ns less per run), so I used it to resolve the merge conflict.
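A minimal version of that comparison can be reproduced with the standard-library `timeit` module. The function being mapped here (`np.linalg.eigvals` over a list of random matrices) is an assumed stand-in for the actual loop body in the PR, and the sizes follow the 10-state case described above:

```python
import timeit
import numpy as np

rng = np.random.default_rng(42)
# Matrices shaped like the A matrix of a 10-state LTI system
mats = [rng.random((10, 10)) for _ in range(100)]

# map vs. list comprehension over the same workload
t_map = timeit.timeit(lambda: list(map(np.linalg.eigvals, mats)), number=50)
t_comp = timeit.timeit(lambda: [np.linalg.eigvals(m) for m in mats], number=50)
print("map: {:.4f}s  list comprehension: {:.4f}s".format(t_map, t_comp))
```

As the PR discussion notes, the two are typically within noise of each other; the dominant cost is the NumPy call itself, not the iteration construct.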
- Use `numpy.linalg.eigvals`, as suggested in "Calculate poles more smarter for state-space systems" (#84)
- Use `numpy.linalg.solve`, as suggested in "Avoid using inv when not necessary" (#85)
- `numpy.linalg.matrix_rank` uses a reasonable numerical threshold, so that we don't need to check `abs(det(F))` by hand. The determinant is actually very sensitive to matrix scaling, so better not use it. Both the determinant and the SVD required for `matrix_rank` are cubic, so there should be no slowdown.
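The scaling sensitivity of the determinant mentioned above is easy to demonstrate. A small sketch (the 3×3 size and the 1e-7 scale factor are arbitrary):

```python
import numpy as np

f = np.eye(3)          # perfectly well-conditioned matrix
scaled = 1e-7 * f      # same rank, same conditioning, just rescaled

# det shrinks with the cube of the scale factor, so it "looks" singular ...
print(np.linalg.det(f))               # 1.0
print(np.linalg.det(scaled))          # ~1e-21
# ... while the SVD-based rank is unaffected by uniform scaling
print(np.linalg.matrix_rank(f))       # 3
print(np.linalg.matrix_rank(scaled))  # 3
```

This is why a fixed threshold on `abs(det(F))` misclassifies well-behaved but small-magnitude matrices, whereas `matrix_rank` compares singular values against a tolerance relative to the largest one.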