
Improved speed of ctrb and obsv functions #941

Merged
merged 8 commits into python-control:main from Jpickard1:main on Dec 1, 2023

Conversation

@Jpickard1 (Contributor)

The current implementations of the ctrb and obsv functions call np.linalg.matrix_power inside a for i in range(1, n) loop. I modified these functions to reduce the number of matrix multiplications by amat, essentially by reusing the matrix power computed at iteration i-1 of the loop when computing iteration i. This mirrors the MATLAB implementation of the corresponding functions.

Additionally, I added an optional parameter t to both functions, defaulting to None, for the case where reduced (time-limited) observability and controllability matrices are needed. By default, the full observability and controllability matrices are created.

This appears to pass all test cases.
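For reference, here is a minimal sketch of the approach for obsv (illustrative names only, not the exact code in the PR): each block row is obtained by multiplying the previous block row by A, so no call to np.linalg.matrix_power is needed, and t simply limits the number of block rows.

```python
import numpy as np

def obsv_sketch(A, C, t=None):
    """Sketch: build [C; CA; ...; CA^(t-1)] by reusing the previous block row."""
    n = A.shape[0]
    p = C.shape[0]
    if t is None:
        t = n  # full observability matrix by default
    ob = np.zeros((p * t, n))
    ob[:p, :] = C
    for i in range(1, t):
        # reuse the block computed at iteration i-1 instead of forming A**i
        ob[i * p:(i + 1) * p, :] = ob[(i - 1) * p:i * p, :] @ A
    return ob
```

The ctrb case is the mirror image: each new block of columns is A times the previous block, stacked column-wise.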

@coveralls commented Nov 25, 2023

Coverage Status
coverage: 95.009% (+0.007%) from 95.002% when pulling f8ba0d9 on Jpickard1:main into f08ad6c on python-control:main.

@sawyerbfuller (Contributor) left a comment

Thanks for the contribution!

A couple of comments:

  1. Have you benchmarked this to confirm it's actually faster? Also, I am not aware of a use case where the obsv or ctrb matrix must be repeatedly evaluated - is there one?
  2. Can you add a unit test for obsv and ctrb for the t < n case?
  3. Can you define n in the docstrings?

control/statefbk.py: inline review comments (resolved)
Jpickard1 and others added 5 commits November 27, 2023 15:04
Co-authored-by: Sawyer Fuller <58706249+sawyerbfuller@users.noreply.github.com>
Co-authored-by: Sawyer Fuller <58706249+sawyerbfuller@users.noreply.github.com>
@Jpickard1 (Contributor, Author)

Dear Prof. Fuller and others,

Thank you for the prompt reply.

Concerning: “Have you benchmarked this to confirm it's actually faster? Also, I am not aware of a use case where the obsv or ctrb matrix must be repeatedly evaluated - is there one?”

I have benchmarked the speed of these two methods in a Google Colab notebook, available here: https://colab.research.google.com/drive/1nDQgmJuQE-Q6H2OZfi4QUstJB44JikHN?usp=sharing. The change appears to yield an improvement in speed (see the plot below). The notebook creates two functions, obsvCurrent and obsvProposed, corresponding to the respective implementations, and times their performance over a range of state-space dimensions with random A and C matrices, reporting the mean time and standard error over 10 iterations. The comparison is done only for the observability matrix, but the controllability matrix should have equivalent performance.

[Plot: mean runtime and standard error vs. state dimension for obsvCurrent and obsvProposed]
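For reference, the timing loop is roughly of the following form (a sketch; obsvCurrent and obsvProposed are defined in the notebook, and the helper name here is mine):

```python
import time
import numpy as np

def time_obsv(obsv_fn, n, p=1, reps=10):
    """Time an observability-matrix constructor on random A (n x n) and C (p x n)."""
    rng = np.random.default_rng(0)
    times = []
    for _ in range(reps):
        A = rng.standard_normal((n, n))
        C = rng.standard_normal((p, n))
        start = time.perf_counter()
        obsv_fn(A, C)
        times.append(time.perf_counter() - start)
    # mean time and standard error over the repetitions
    return np.mean(times), np.std(times) / np.sqrt(reps)

# e.g. compare the two implementations over a range of state dimensions:
# for n in (50, 100, 200, 400):
#     print(n, time_obsv(obsvCurrent, n), time_obsv(obsvProposed, n))
```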

I agree that there are not many use cases where the controllability or observability matrices need to be created repeatedly, but I have lately been working on a problem of biological sensor selection where I find it beneficial to create many observability matrices. In particular, I am applying several sensor selection algorithms to design my output matrix C, and then constructing the observability matrix for each candidate C in order to follow a method proposed by Hasnain et al. 2023, which also motivates the addition of t as a time-limiting parameter for these matrices. Still, even if constructing the controllability and observability matrices is typically a one-off calculation, this should still be faster.

Regarding: “can you add a unit test for obsv and ctrb for the t < n case?”

I have added the test cases testObsvT and testCtrbT to python-control/control/tests/statefbk_test.py.

Concerning: “can you define n in the docstrings?”

I have removed n from the docstring and replaced it with A.shape[0] to indicate the dimension of the state space model. Apologies if I have missed it somewhere in the current documentation, but is there a convention/letter used throughout the documentation to indicate what the dimension of the model is? If so, then I will use that convention rather than A.shape[0].

The addition of this parameter may seem unnecessary, but it is relevant to a method I am currently using. Since adding t with a default of None doesn't break anything, as a user of this code I would find it a nice and helpful addition; however, if my use case is a one-off task and not in line with the goals and most common use cases of your code, I can remove the optional argument. Please let me know what you think is best.

Please let me know if there is anything else! Thank you very much. Best,
Joshua Pickard

@ilayn commented Nov 27, 2023

This is the right way to go, not only for performance but also for accuracy; hence it is an improvement in any case. It is basically a Krylov subspace argument that leads to repeated multiplications by A rather than explicitly computing the powers (a point that is, for some reason, usually skipped, even though it makes the exposition very accessible).

However, this test is fundamentally broken for production code: i) it is only a theoretical tool, since the powers blow up regardless; ii) the rank decision is a very tricky thing to nail down without false positives and false negatives; iii) as if that were not enough, the answer is a binary yes or no, giving no indication of how close we are dancing to uncontrollability/unobservability.

If you are really looking for a better test, a typical and best bang-for-the-buck candidate is D. Boley's test, which in our context computes the cancellation distance (originally it checks a rank-deficiency metric). You can then put a limit on the distance and tie it to the matrix norms, and so on. In my opinion it is a much better candidate than a lower semi-continuous function.

I have the regular version here in case you want to check it out. The article is in the docstring.

I hope academia puts a bit more emphasis on the fact that the Kalman test is just a theoretical tool that we use only on homework problems.
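To make point (i) above concrete, here is a small illustration (the system is made up for illustration, not taken from this thread): a pair that is controllable in exact arithmetic, but whose Kalman matrix is so badly scaled that the numerical rank decision becomes meaningless.

```python
import numpy as np
from control import ctrb

# Distinct eigenvalues spread over [0.5, 2] and an input exciting every mode,
# so (A, B) is controllable in exact arithmetic.
n = 30
A = np.diag(np.linspace(0.5, 2.0, n))
B = np.ones((n, 1))

Cmat = ctrb(A, B)            # columns B, AB, ..., A^(n-1)B: a Vandermonde-like matrix
print(np.linalg.cond(Cmat))  # enormous, because the powers blow up / decay
print(np.linalg.matrix_rank(Cmat), "of", n)  # typically reports rank < n anyway
```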

@Jpickard1 (Contributor, Author)

Dear Ilhan,

Thank you. I agree there are several issues with utilizing the Kalman test for determining controllability or observability, and thank you for sharing the test by Boley.

My particular motivation for this change is to use Gramians to determine observability. This is advantageous to me because, in addition to providing a yes/no test, the Gramian provides information about the principal directions of controllability/observability. Since discrete, finite-horizon Gramians can be computed in terms of obsv(A, C, t=T), it seems like adding t as an argument would be useful functionality to include in the code.
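For concreteness, this is the relation I have in mind (a sketch; the helper name is mine, and it uses the optional t argument proposed in this PR):

```python
import numpy as np
from control import obsv

def finite_horizon_obsv_gramian(A, C, T):
    """Discrete-time T-step observability Gramian:
    W_o(T) = sum_{k=0}^{T-1} (A^k)^T C^T C A^k = O_T^T O_T,
    where O_T = [C; CA; ...; CA^(T-1)] = obsv(A, C, t=T)."""
    OT = obsv(A, C, t=T)
    return OT.T @ OT

# The eigenvectors of W_o(T) associated with the largest eigenvalues give the
# most observable directions of the state space over that horizon.
```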

While there are the limitations you have noted, it seems to me that we are in agreement that this is a beneficial argument to add to the ctrb and obsv functions. Please let me know if you think I am misunderstanding your points above or if there are other changes that need to be made.

Thank you. Best,
Joshua Pickard

@Jpickard1 (Contributor, Author)

Dear all,

It appears that this PR is failing with the error below, related to Doctest / doctest-linux (pull_request), and I am hoping for some pointers to fix it. The error appears to occur in the doctest for control.forced_response, a section of code that I do not believe was modified on my fork or for this pull request. I am not familiar with the doctests for this repo, so could you please advise whether there are changes I should make on my end to resolve this error, or whether the error exists more broadly in the repo?

Thank you. Best,
Joshua Pickard

ERROR: [numpydoc] While processing docstring for 'control.forced_response'

Extension error (numpydoc.numpydoc):
Handler <function mangle_docstrings at 0x7fa850237ec0> for event 'autodoc-process-docstring' threw an exception (exception: Expected a class or None, but got <function forced_response at 0x7fa853992ac0>)
make: *** [Makefile:32: html] Error 2
reading sources... [ 26%] generated/control.forced_response
Error: Process completed with exit code 2.

@bnavigator (Contributor)

It's failing in parts of the code unaffected by your change, so don't worry about it in this PR.

@bnavigator mentioned this pull request on Dec 1, 2023
@sawyerbfuller (Contributor)

I tried re-running the failing test once or twice but it kept failing. I agree that it has nothing to do with this PR, so I'll merge it.

@sawyerbfuller merged commit 82f3fe3 into python-control:main on Dec 1, 2023
13 of 14 checks passed
@murrayrm added this to the 0.10.0 milestone on Mar 31, 2024