When working on issue #438, I had the most improvement when looking at the slowest test classes instead of the slowest test methods. Converting many related tests to mock tests gave me the most improvement per hour of coding.
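As a rough sketch of what one of those conversions might look like, assuming the django_mock_queries package (which I take to be the "Django mock queries" mentioned in a later comment) and made-up app, model, and field names:

```python
from unittest.mock import patch

from django.test import SimpleTestCase
from django_mock_queries.query import MockModel, MockSet

from portal.models import Dataset  # hypothetical app and model


class DatasetMockTests(SimpleTestCase):
    """Runs without touching the database at all."""

    def test_filter_by_name(self):
        datasets = MockSet(MockModel(name='apples'), MockModel(name='bananas'))
        # Swap the real manager for an in-memory queryset for this test only.
        with patch('portal.models.Dataset.objects', datasets):
            names = [d.name for d in Dataset.objects.filter(name='apples')]
        self.assertEqual(['apples'], names)
```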
The other option is the technique we used in issues #243 and #296: using a test fixture was much faster than running pipelines during setup.
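For the fixture approach from #243 and #296, a minimal sketch (the fixture file, URL, and test names here are invented):

```python
from django.test import TestCase


class PipelineDisplayTests(TestCase):
    # Load prebuilt records from a JSON dump instead of executing the
    # pipelines again in setUp(); the file name here is hypothetical.
    fixtures = ['converted_pipelines.json']

    def test_pipeline_listing(self):
        response = self.client.get('/pipelines/')  # hypothetical URL
        self.assertEqual(200, response.status_code)
```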
To find the slowest tests, install unittest-xml-reporting with pip and then configure Django to use it. Run the test suite, and then run the Kive/utils/slow_test_report.py script to find the slowest tests. To see where the time is taken in a test, run it with the Python profiler. Another option is to install the gprof2dot package with pip and generate a call graph; a rough command sketch follows below.
When you're finished optimizing tests, delete the XML files, and set the unit test runner back to the regular Django version.
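Here's the command sketch mentioned above. The test label, output file names, and the idea of profiling through manage.py are my assumptions, not details spelled out in this issue:

```bash
# Report slow tests: switch Django's test runner to the XML reporter, e.g.
# TEST_RUNNER = 'xmlrunner.extra.djangotestrunner.XMLTestRunner' in settings,
# then run the suite and summarize the XML output with the report script.
pip install unittest-xml-reporting
python manage.py test
python Kive/utils/slow_test_report.py

# Profile a single slow test under cProfile (the test label is a made-up example).
python -m cProfile -o slow_test.prof manage.py test portal.tests.SlowTestCase

# Turn the profile into a call graph (needs Graphviz's dot on the PATH).
pip install gprof2dot
gprof2dot -f pstats slow_test.prof | dot -Tpng -o slow_test_calls.png
```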
We successfully squashed the migrations in issue #438, but we may need to do it again after a few releases.
Some of the test setup can't be converted to fixtures, like creating a sandbox. For that kind of setup, it might be faster to monkey-patch some of the clean() methods to do nothing. Alternatively, a class property could disable the clean() methods.
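A sketch of the monkey-patching idea with unittest.mock (the Sandbox model path is a guess at where the expensive clean() lives):

```python
from unittest.mock import patch

from django.test import TestCase


class SandboxSetupTests(TestCase):
    def setUp(self):
        # Replace the expensive validation with a no-op for this test class.
        patcher = patch('sandbox.models.Sandbox.clean')
        patcher.start()
        self.addCleanup(patcher.stop)
        # ... build the sandbox objects the tests need ...
```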
It's possible that some of the slowest tests are just repeating other tests. We could try using coverage reports to compare runs with and without one of the slowest tests. If a test isn't covering anything unique, remove it. If it isn't covering much, replace it with mock tests.
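One way to run that comparison with coverage.py; the skip mechanism is left as a comment because this issue doesn't say how the test would be excluded:

```bash
# Coverage with the full suite.
coverage run manage.py test
coverage report > with_slow_test.txt

# Re-run with the suspect test temporarily skipped (e.g. an @skip decorator),
# then compare the two reports.
coverage run manage.py test
coverage report > without_slow_test.txt
diff with_slow_test.txt without_slow_test.txt
```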
identify slow tests
speed up worst ones
disable clean()
try running JavaScript tests using PhantomJS on Travis CI
While the Django mock queries are useful for simple queries, they are not a good fit for complex queries. SQLite also has some differences from PostgreSQL. Try to optimise PostgreSQL for unit tests to see if we can reduce the performance cost and gain the reduced complexity of just using PostgreSQL everywhere.
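If we try that, the usual starting point is a throwaway PostgreSQL instance with durability turned off. These are standard postgresql.conf knobs rather than anything decided in this issue, and they're only safe because test data is disposable:

```
fsync = off
synchronous_commit = off
full_page_writes = off
```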
Since removing support for the old pipelines, the test suite runs in under a minute, and the whole continuous integration build runs in under four minutes. Closing this issue.