Test improvements today #334
Conversation
Notebooks 5 and 17 are the culprits. After the fast FiniteDifference is moved from pysindy to derivative, notebook 5 will be faster. Also, gitignore the output files from notebook 17.
We don't need to test legacy ensembling behavior for every optimizer, since the ensembling part doesn't depend upon the inner optimizer. We also don't need 10x10 bagging x library_bagging; 2x2 is enough to test functionality.

Old test durations:

4.82s call test/optimizers/test_optimizers.py::test_legacy_ensemble_odes[StableLinearSR3]
1.69s call test/optimizers/test_optimizers.py::test_legacy_ensemble_odes[MIOSR]
0.36s call test/optimizers/test_optimizers.py::test_legacy_ensemble_odes[TrappingSR3]
0.34s call test/optimizers/test_optimizers.py::test_legacy_ensemble_odes[FROLS]
0.11s call test/optimizers/test_optimizers.py::test_legacy_ensemble_odes[SSR]
0.04s call test/optimizers/test_optimizers.py::test_legacy_ensemble_odes[ConstrainedSR3]
0.04s call test/optimizers/test_optimizers.py::test_legacy_ensemble_odes[SR3]
0.04s call test/optimizers/test_optimizers.py::test_legacy_ensemble_odes[STLSQ]

New test duration:

0.02s call test/optimizers/test_optimizers.py::test_legacy_ensemble_odes
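The point that 2x2 exercises the same code path as 10x10 can be illustrated with a minimal bagging loop. This is a hypothetical numpy-only stand-in for the ensembling mechanism, not pysindy's actual implementation; all names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: the target depends on 2 of 3 candidate library terms.
X = rng.normal(size=(40, 3))
y = X @ np.array([2.0, -1.0, 0.0]) + 0.01 * rng.normal(size=40)

n_bags, n_library_bags = 2, 2  # 2x2 covers the mechanism; 10x10 adds nothing
coefs = []
for _ in range(n_bags):
    rows = rng.integers(0, len(X), size=len(X))      # bootstrap rows (bagging)
    for _ in range(n_library_bags):
        cols = rng.choice(3, size=2, replace=False)  # subsample library terms
        beta = np.zeros(3)
        # The "inner optimizer" is interchangeable; plain least squares here.
        beta[cols] = np.linalg.lstsq(X[rows][:, cols], y[rows], rcond=None)[0]
        coefs.append(beta)

mean_coef = np.mean(coefs, axis=0)  # ensemble estimate: 4 sub-models, 3 terms
```

The inner fit is a black box to the bagging loop, which is exactly why the ensembling tests don't need to be parametrized over every optimizer.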
Also, flatten test/differentiation/test_differentiation_methods.py into test/test_differentiation.py; the nesting was unnecessary. Previously, test_nan_derivatives asked whether the model fit by SINDy when FiniteDifference gave NaN endpoints was close to the model fit without NaNs. This is an integration test that is really asking whether FiniteDifference calculates derivatives consistently between drop_endpoints=True and drop_endpoints=False. It is also a bit faster now, though this test was fast to begin with.
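That consistency property can be checked at the unit level. Below is a numpy-only sketch of the idea (a simplified central-difference stand-in, not pysindy's actual FiniteDifference class): the interior derivatives must be identical whether NaN endpoints are kept or dropped.

```python
import numpy as np

def central_diff(x, t, drop_endpoints=False):
    """Second-order central difference; endpoints are NaN unless dropped."""
    dx = np.full_like(x, np.nan)
    dx[1:-1] = (x[2:] - x[:-2]) / (t[2:] - t[:-2])
    return dx[1:-1] if drop_endpoints else dx

t = np.linspace(0, 1, 11)
x = np.sin(t)

with_nans = central_diff(x, t)                      # NaN at both endpoints
dropped = central_diff(x, t, drop_endpoints=True)   # endpoints removed

# Consistency: interior derivatives agree regardless of endpoint handling.
interior_match = np.allclose(with_nans[1:-1], dropped)
```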
Previously, test_complexity and test_coefficients just tested that:
- required names are present in the SINDy object
- when fit on some amount of data, fewer than 10 terms are nonzero

These are essentially the same test, since model.complexity is just a wrapper around its optimizer's complexity property. But Lorenz has 7 nonzero terms, and when I changed the data to speed up tests, the data had a few more nonzeros.
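The wrapper relationship can be sketched with stub classes (hypothetical stand-ins, not the real SINDy API; the library term ordering below is illustrative):

```python
import numpy as np

class StubOptimizer:
    def __init__(self, coef):
        self.coef_ = np.asarray(coef)

    @property
    def complexity(self):
        # Number of nonzero terms in the fitted coefficient matrix.
        return int(np.count_nonzero(self.coef_))

class StubModel:
    def __init__(self, optimizer):
        self.optimizer = optimizer

    @property
    def complexity(self):
        # Pure pass-through: testing this and the optimizer's
        # property separately is testing the same thing twice.
        return self.optimizer.complexity

# Lorenz in a polynomial library has 7 nonzero coefficients
# (illustrative column order: 1, x, y, z, xy, xz).
lorenz_coef = np.array([
    [0.0, -10.0, 10.0,  0.0, 0.0,  0.0],  # x' = 10(y - x)
    [0.0,  28.0, -1.0,  0.0, 0.0, -1.0],  # y' = 28x - y - xz
    [0.0,   0.0,  0.0, -8/3, 1.0,  0.0],  # z' = xy - (8/3)z
])
model = StubModel(StubOptimizer(lorenz_coef))
```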
We don't need 500 timepoints; shortening to 50. Also, data_sindypi_library and data_pde_library had an implicit dependence upon data_lorenz; this commit makes that dependence explicit.
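In pytest, making a dependency explicit just means the downstream fixture takes data_lorenz as an argument so that pytest injects it. Written as plain functions to keep the sketch self-contained and runnable (the fixture names come from the PR; the bodies are placeholders, not the real data):

```python
import numpy as np

# In the real suite these are pytest fixtures; plain functions here.
def data_lorenz():
    t = np.linspace(0, 1, 50)  # 50 timepoints instead of 500
    x = np.stack([np.sin(t), np.cos(t), t], axis=1)  # placeholder trajectory
    return x, t

def data_pde_library(lorenz_data):
    # Explicit dependence: the upstream fixture's value is passed in,
    # rather than silently regenerating the same data internally.
    x, t = lorenz_data
    return {"x": x, "t": t}

lib = data_pde_library(data_lorenz())
```

With pytest, the same structure is `def data_pde_library(data_lorenz): ...` under `@pytest.fixture`, and pytest resolves the argument by fixture name.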
A lot of test files were nested under a directory with just one file. It's now one less click to view test results or files.
Codecov Report: patch coverage has no change; project coverage changed by -0.41%.
@@ Coverage Diff @@
## master #334 +/- ##
==========================================
- Coverage 92.31% 91.91% -0.41%
==========================================
Files 37 37
Lines 3747 3747
==========================================
- Hits 3459 3444 -15
- Misses 288 303 +15
@Jacob-Stevens-Haas hoping to find some time to review this and the other PRs in the next week or two!
If that's the case, rather than simulating Lorenz, training a model, and creating an interp function, I'll just mock up a model with given coefficients, maybe choose simpler dynamics, and pass a fast callable. It looks like cutting the number of integrations in half, integrating a simpler ODE, and reducing the number of time steps brings the duration down. Profiling results:
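The plan above can be sketched as follows. All names here are hypothetical stand-ins, not pysindy's API: a mock model with fixed coefficients, simple stable linear dynamics in place of Lorenz, and a cheap explicit-Euler loop standing in for the fast callable.

```python
import numpy as np

class MockModel:
    """Stand-in for a fitted model with known coefficients."""
    def __init__(self, coef):
        self.coef_ = np.asarray(coef)

    def predict(self, x):
        return x @ self.coef_.T

# Simple stable linear dynamics instead of Lorenz: x' = A x.
A = np.array([[-1.0, 0.0],
              [0.0, -2.0]])
model = MockModel(A)

def simulate(model, x0, t):
    # Coarse explicit Euler: fast, and accurate enough for a smoke test.
    x = np.empty((len(t), len(x0)))
    x[0] = x0
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        x[i] = x[i - 1] + dt * model.predict(x[i - 1][None, :])[0]
    return x

t = np.linspace(0, 1, 20)  # far fewer time steps than the original test
traj = simulate(model, np.array([1.0, 1.0]), t)
```

Because the dynamics are known to decay, the test can assert a qualitative property cheaply instead of comparing against an expensive reference integration.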
Force-pushed from db9e381 to 7be9a4f
I'm going to go ahead and merge this PR so that I can troubleshoot the codecov issue, which is a lot easier with faster tests. If anyone needs me to undo it, I can.
Thumbs up! FYI, I am going to push an update to notebook 17 in the next few days as I finalize some revisions for a resubmission. I made some minor changes to allow the notebook test to end early so that I don't have to construct a huge number of dummy data sets for all the various tests, which I think is the best solution to keep the test reasonable. You also asked about plt.ion in the other pull request a while back -- I had included it to stop the test from hanging on plt.show() on my system until the plot is closed, but feel free to remove it if things work without it.
Beginning to work on #320. A variety of changes to speed up testing, from around 370 to 154 seconds (e.g. shortening data_lorenz from 500 data points to 50). I'll continue in a new PR next week or after a short break. The following test accounts for one third of the remaining test time: