Break apart & speed up integration tests. #320
Thanks! Yeah, some of these tests are slower than necessary and can be improved with just a bit of work. Would be good to clean it up a bit.
New test times, out of a total of 144.75s:
For data fixtures, that's …
@znicolaou - I want to ask about some of the longest tests remaining:
Similar questions for the 3D and 5D [Weak] PDEs. Currently the tests take around 50 sec to run. I'll work on … Current status on the slowest tests:

```
8.11s setup  test/test_feature_library.py::test_parameterized_library
7.14s call   test/test_feature_library.py::test_5D_pdes
5.17s call   test/test_optimizers.py::test_stable_linear_sr3_linear_library[params5]
3.76s call   test/test_feature_library.py::test_3D_weak_pdes
2.01s call   test/test_optimizers.py::test_trapping_inequality_constraints[params1]
1.95s call   test/test_optimizers.py::test_trapping_inequality_constraints[params2]
1.63s call   test/test_optimizers.py::test_stable_linear_sr3_linear_library[params7]
1.32s call   test/test_optimizers.py::test_stable_linear_sr3_linear_library[params3]
1.26s call   test/test_optimizers.py::test_stable_linear_sr3_linear_library[params6]
1.12s call   test/test_optimizers.py::test_stable_linear_sr3_linear_library[params4]
1.09s call   test/test_optimizers.py::test_sample_weight_optimizers[StableLinearSR3]
0.98s call   test/test_optimizers.py::test_trapping_inequality_constraints[params0]
0.91s call   test/test_optimizers.py::test_trapping_inequality_constraints[params3]
0.79s call   test/test_pysindy.py::test_multiple_trajectories_and_ensemble
0.75s call   test/test_optimizers.py::test_stable_linear_sr3_linear_library[params2]
0.68s call   test/test_optimizers.py::test_optimizers_verbose_cvxpy[StableLinearSR3]
0.67s call   test/test_optimizers.py::test_stable_linear_sr3_linear_library[params1]
0.63s call   test/test_optimizers.py::test_stable_linear_sr3_linear_library[params0]
0.58s call   test/test_feature_library.py::test_3D_pdes
0.58s call   test/test_feature_library.py::test_1D_pdes
0.56s call   test/test_optimizers.py::test_optimizers_verbose[StableLinearSR3]
0.56s call   test/test_feature_library.py::test_5D_weak_pdes
0.53s call   test/test_differentiation.py::test_centered_difference_noaxis_vs_axis
```
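Per-test timing reports like the one above come from pytest's built-in duration flags; a minimal invocation might look like this (the threshold values are illustrative, not the ones used here):

```shell
# Report the 25 slowest setup/call/teardown phases, ignoring anything
# faster than half a second.
pytest --durations=25 --durations-min=0.5 test/
```

Note that `--durations-min` requires pytest >= 6.2; older versions only support `--durations=N`.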
Hey @Jacob-Stevens-Haas , I've been swamped the last few weeks, but hoping I can find some time to think about this soon...I think it should be possible to make faster tests, but it just takes time to implement. For the basic tests, we don't really need to integrate anything, we can just use random data instead. But I'm not so sure that the tests are really testing anything anyways--I don't think I've ever seen a problem identified from the unit tests.
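The "random data instead of integration" idea could look something like the fixture below (the names and shapes are hypothetical, not pysindy's actual test fixtures):

```python
import numpy as np
import pytest


def make_random_trajectory(n_samples=100, n_features=3, seed=0):
    """Cheap stand-in for an integrated trajectory: random data with the
    same (n_samples, n_features) shape conventions as solver output."""
    rng = np.random.default_rng(seed)  # seeded for reproducibility
    t = np.linspace(0, 1, n_samples)
    x = rng.standard_normal((n_samples, n_features))
    return t, x


@pytest.fixture
def random_trajectory():
    # Fixtures built this way avoid calling an ODE solver entirely,
    # which is usually the dominant cost in the slow tests.
    return make_random_trajectory()
```

Most library and optimizer code paths only care about shapes and dtypes, so a fixture like this exercises them at a fraction of the cost of `solve_ivp`.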
Part of the reason you haven't seen a problem identified in unit tests is because we have so few true "unit" tests. Mostly, the integration tests stand in for unit tests. For example, when I made big changes last year, they caught things like the arrangement of the axes in data, which doesn't really require an integration test. But given that integration tests exist now and are better than nothing, a separate consideration is whether they should be solving a known problem to a (probably arbitrary) degree of accuracy, or whether they should just test that the code executes without error.
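As a sketch of the axis-arrangement kind of unit test mentioned here, random data is enough; `centered_difference` below is a stand-in built on `np.gradient`, not pysindy's actual implementation:

```python
import numpy as np


def centered_difference(x, t, axis=0):
    """Minimal centered-difference stand-in (not pysindy's implementation)."""
    return np.gradient(x, t, axis=axis)


def test_axis_arrangement():
    # Random data pins down axis conventions without any integration:
    # differentiating along time must leave the feature axis untouched.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 50)
    x = rng.standard_normal((50, 3))  # (n_time, n_features)
    dx = centered_difference(x, t, axis=0)
    assert dx.shape == x.shape
    # Transposing the data and the axis argument must give the same result.
    np.testing.assert_allclose(centered_difference(x.T, t, axis=1), dx.T)
```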
EDIT1: I was able to cut the …
EDIT2: I coarsened those integrations, removed ensembling from …
With the pending PR, this is the final time trial:
While there are still a bunch of tests that could be deleted or simplified (I'm giving …)
Thanks, Jake! Totally agree that tests for the actual functionality would be useful. Nice that coarsening helped. Your second consideration is definitely important, I think. Just running vs calculating the right values vs actually fitting data well are all possible, but just running is probably too weak IMO. FYI, the 1d, 2d, 5d thing is a bit of a legacy--the old PDE library hand coded different cases for different dimensions, but in the new library, if something works for 2d, it should work in all cases. Anyway, sorry I still haven't gotten to looking into details; I'm at a conference now and had to rush on a paper submission before leaving.
Ah, good to know about the PDE library legacy tests! Agree that "just running" is probably too weak. At the same time, I think that testing a whole problem, rather than just the library's functionality, is too expansive - it depends upon particular SINDy parameters, rather than just library parameters, and fixing any potential bug in the library can cause the integration test to fail. I think the ideal tests stay limited to just the library. E.g. that WeakSINDy can calculate the integral of a known function multiplied by the test function over a single domain cell. And a separate test for selecting domain cells.
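A minimal sketch of the single-cell weak-form check described above, assuming a simple polynomial bump test function and plain trapezoidal quadrature (none of this is pysindy's actual internals):

```python
import numpy as np


def bump(x, a, b):
    """Polynomial test function vanishing at the cell boundaries."""
    return (x - a) ** 2 * (b - x) ** 2


def weak_integral(f, a, b, n=1001):
    """Approximate the weak-form inner product <f, phi> on one cell [a, b]
    with the composite trapezoid rule."""
    x = np.linspace(a, b, n)
    y = f(x) * bump(x, a, b)
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0


# For f(x) = 1 the integral has a closed form, (b - a)**5 / 30, which
# gives an exact target to test the quadrature against.
a, b = 0.0, 0.5
exact = (b - a) ** 5 / 30
approx = weak_integral(lambda x: np.ones_like(x), a, b)
assert abs(approx - exact) < 1e-7
```

The point of the sketch is that a test like this isolates the quadrature step: it can fail only if the weak-form integral itself is wrong, not because an optimizer or simulation drifted.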
I've been mulling over and proposing a lot of changes, and all of them will require refactoring. Refactoring cycles tend to involve running a lot of tests. On that note, here are all the tests that take a long time.
Most of these are optimizer tests (`pysindy.optimizers`) and shouldn't need to build a full SINDy model. Some may be improved by shortening the simulation data (cf. the `if __name__ == "testing"` pattern). Parametrized cases could be given descriptive names (rather than e.g. `params4`) so it could be read from test output. Benchmarks could be tracked with `asv`.
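The naming point can be sketched with `pytest.param(..., id=...)`; the test body here is a toy, not an actual pysindy test:

```python
import pytest


@pytest.mark.parametrize(
    "threshold, expected_sparse",
    [
        # Explicit ids make failures read as test_sparsity[moderate-threshold]
        # instead of the opaque default test_sparsity[params1].
        pytest.param(0.0, False, id="no-thresholding"),
        pytest.param(0.5, True, id="moderate-threshold"),
    ],
)
def test_sparsity(threshold, expected_sparse):
    coeffs = [0.1, 0.9]
    is_sparse = any(c < threshold for c in coeffs)
    assert is_sparse == expected_sparse
```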