Travis CI sporadically fails in control.tests.convert_test.TestConvert #194
Two reasons I can think of for different results given the same inputs are:

However, neither seems all that likely. Besides that, for the assertion to be triggered (BTW, shouldn't that be ...?), if this is due to round-off, why not dodge the problem with ...?
Running the tests gives the following error message:
@stanmoon it would be great if we could reproduce this error. Could you let us know your CPU, OS and OS version, as well as your Python, NumPy, SciPy, and python-control versions? If you need help figuring any of that out, let us know.
In case this was due to SciPy or NumPy updates, I just retried this, but I can't reproduce it on my system: Ubuntu 18.04 64-bit running on an Intel i5-5200U. I tried the following two setups:
* This setup probably uses "plain old" BLAS and LAPACK for linear algebra. Not sure if
* This setup had MKL installed (it was a default Conda setup).
* replaced the `_common_den` function internals. Passes tests
* make xferfcn.py python2 compatible again
* working tf -> ss transformation now, also solved #111
* disabled MIMO test when no slycot
* do not cancel pole/zero pairs before calculating pole() in xferfcn.py
  - for the above reason, do conversion on minreal'd xferfcn in statesp.py
  - add a test for not canceling pole/zero pairs when calculating pole()
  - add import of matlab in discrete_test.py
* change testModred; that one did state removal on a system of which the selection of states was automatic
Fixed in PR #206.
Travis CI tests are failing in semi-random ways and generating the following error:
The error is not always present, and re-running the build sometimes fixes it. It appears the problem may be due to the way that random test cases are generated (although the seed is fixed in the unit test, so it is not clear how we are getting different results on rebuild).