
Addressing #84 and #85 #91

Merged
merged 3 commits into from
May 30, 2016

Conversation

mp4096
Contributor

@mp4096 mp4096 commented May 20, 2016

  • Compute poles of a state-space system using numpy.linalg.eigvals, as suggested in Calculate poles more smarter for state-space systems #84
  • Replace matrix inversion in the feedback computation with numpy.linalg.solve, as suggested in Avoid using inv when not necessary #85
  • Also replace the singularity check with a more stable one. numpy.linalg.matrix_rank uses a reasonable numerical threshold, so we don't need to check abs(det(F)) by hand. The determinant is very sensitive to matrix scaling, so it is better not to use it. Both the determinant and the SVD required for matrix_rank are cubic in the matrix size, so there should be no slowdown.
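The first two changes above can be sketched roughly as follows. The matrices here are made-up illustrations, not the actual python-control internals:

```python
import numpy as np

# Hypothetical 2-state system matrix, for illustration only
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

# Poles as eigenvalues of the A matrix (change from #84)
poles = np.linalg.eigvals(A)

# Instead of computing inv(F) @ G, solve F @ X = G (change from #85)
F = np.array([[4.0, 1.0],
              [2.0, 3.0]])
G = np.array([[1.0],
              [0.0]])
X = np.linalg.solve(F, G)

# Singularity check via numerical rank rather than abs(det(F))
assert np.linalg.matrix_rank(F) == F.shape[0]
```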

@coveralls

Coverage Status

Coverage increased (+0.009%) to 74.042% when pulling 6aee194 on mp4096:master into 0b26e06 on python-control:master.

@mp4096
Contributor Author

mp4096 commented May 20, 2016

Travis fails exactly at the same test as master, also see #70

@pmli

pmli commented May 24, 2016

Kudos for finding abs(det(F)). Determinant gives no information about the distance to singularity, therefore it cannot be used to numerically check for singularity. SVD is definitely much better. But, wouldn't it be even better to drop the singularity check and trust numpy.linalg.solve to raise LinAlgError if singularity is detected? Do you have a use case when numpy.linalg.matrix_rank detects singularity, but numpy.linalg.solve doesn't?

@mp4096
Contributor Author

mp4096 commented May 25, 2016

Thanks for the hint. Indeed, an additional SVD might seem superfluous. However, it can be really helpful.

Consider this code:

import numpy as np
import matplotlib.pyplot as plt

matrix_size = 3

for spread in [5, 10, 20]:
    # Random matrix whose singular values span roughly 2*spread decades
    A1 = np.random.rand(matrix_size, matrix_size)
    A2 = np.diag(np.logspace(-spread, +spread, matrix_size))
    A3 = np.random.rand(matrix_size, matrix_size)

    A = A1 @ A2 @ A3

    # The exact solution of A X = A is the identity matrix
    invA_A = np.linalg.solve(A, A)

    message = "Spread: {:d} decades, matrix size: {:d}, matrix rank: {:d}"
    print(message.format(2 * spread, matrix_size, np.linalg.matrix_rank(A)))

    # Should look like an identity matrix
    plt.figure()
    plt.imshow(np.log10(np.absolute(invA_A)), interpolation="none")
    plt.colorbar()

plt.show()
So instead of raising an exception, numpy.linalg.solve just returns numerical garbage. The problem is that numpy.linalg.solve relies on LAPACK's DGESV, which in turn calls DGETRF to perform an LUP decomposition. DGETRF reports an error only if a diagonal element of U is exactly zero:

INFO is INTEGER
= 0:  successful exit
< 0:  if INFO = -i, the i-th argument had an illegal value
> 0:  if INFO = i, U(i,i) is exactly zero. The factorization
    has been completed, but the factor U is exactly
    singular, and division by zero will occur if it is used
    to solve a system of equations.

Hence, numerical rank loss cannot be detected by numpy.linalg.solve.

@pmli

pmli commented May 29, 2016

Thanks for the detailed explanation. I assumed NumPy/SciPy would emulate Matlab with its warnings about large condition numbers, using DGESVX. At least, there is an open NumPy issue concerning exactly this, although it is quite old. Until it is closed, I don't see a way to cheaply estimate the condition number in NumPy that could replace the SVD.
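For what it's worth, NumPy's own numpy.linalg.cond with the default 2-norm also runs an SVD under the hood, so it is no cheaper than matrix_rank. A hypothetical condition-number check (matrix and threshold chosen purely for illustration) could look like:

```python
import numpy as np

# Hypothetical nearly singular matrix for illustration
A = np.diag([1.0, 1e-16])

# 2-norm condition number; computed internally via SVD
kappa = np.linalg.cond(A)

# Flag the matrix as numerically singular when kappa approaches 1/eps
ill_conditioned = kappa > 1.0 / np.finfo(float).eps
print(ill_conditioned)
```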

@mp4096
Contributor Author

mp4096 commented May 29, 2016

Oh, I didn't know about this issue, thanks! But it doesn't seem they're going to do anything about this behaviour.

Anyway, most technical systems have very few (in linear-algebra terms 😉) inputs/outputs. I think this additional SVD-based rank check will make things slower only if there are at least 1000 inputs/outputs (we're inverting D here).
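As a sketch of that trade-off (the variable names, shapes, and the exact interconnection formula below are assumptions for illustration, not the actual feedback code), the rank-guarded solve could look like:

```python
import numpy as np

# Hypothetical feedthrough matrices of two interconnected systems
D1 = np.array([[0.5]])
D2 = np.array([[1.0]])

# Interconnection matrix that must be inverted in the feedback formula;
# its size equals the number of inputs/outputs, so the SVD-based rank
# check stays cheap unless there are on the order of 1000 of them
F = np.eye(D1.shape[0]) + D2 @ D1

if np.linalg.matrix_rank(F) < F.shape[0]:
    raise ValueError("Feedback interconnection is singular")

X = np.linalg.solve(F, D1)
```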

@murrayrm murrayrm merged commit 6dfd5a5 into python-control:master May 30, 2016
@mp4096
Contributor Author

mp4096 commented May 30, 2016

Thank you! I think one can close #84 and #85 now.

slivingston added a commit that referenced this pull request Dec 31, 2016
#101

Changes are from branch `master` of
https://github.com/mp4096/python-control.git

There was a merge conflict in how a for-loop was refactored into
`map` (here) vs. a list comprehension (from PR #110).

I compared the two alternatives using Jupyter's %timeit for matrices
that would correspond to LTI systems with 10 state dimensions, 2
inputs and 2 outputs (so the A matrix has shape (10, 10), the B matrix
has shape (10, 2), etc.), and with 100 state dimensions, 20 inputs and
20 outputs, all using matrices from numpy.random.random((r, c)).

The difference in timing performance does not appear significant.
However, the `map` variant was slightly faster (by approximately
500 to 900 ns), so I decided to use it to resolve the merge conflict.
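For reference, a rough version of such a comparison (with a toy per-element operation standing in for the actual matrix code) can be reproduced with the stdlib timeit module:

```python
import timeit

data = list(range(1000))

# Time list(map(...)) against an equivalent list comprehension
t_map = timeit.timeit(lambda: list(map(abs, data)), number=1000)
t_comp = timeit.timeit(lambda: [abs(x) for x in data], number=1000)

print("map: {:.4f} s, comprehension: {:.4f} s".format(t_map, t_comp))
```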