Speed up LB tests #2124
Conversation
At first I was skeptical about removing all the sql files, but we never really invoke them by hand since we have manage.py script for everything, so this looks great!
@@ -0,0 +1,9 @@
BEGIN;
DELETE FROM listen CASCADE;
I think TRUNCATE is faster than DELETE FROM ....
Only for large tables, it seems. In the context of tests the tables will always be small, so I guess this is fine.
Yup, right. TRUNCATE is faster for large tables but DELETE for small ones, and the tables in tests only have a handful of rows.
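To make the chosen approach concrete, here is a minimal sketch of the per-test cleanup this PR moves to: wrap a `DELETE FROM` for each table in one transaction instead of dropping and recreating the schema. The table names and the `reset_tables` helper are illustrative, not LB's actual fixture code, and `sqlite3` stands in for Postgres:

```python
import sqlite3

# Illustrative table list; the real test harness has its own set of tables.
TABLES = ["listen", "listen_user_metadata"]

def reset_tables(conn: sqlite3.Connection) -> None:
    # `with conn:` wraps the DELETEs in one BEGIN ... COMMIT,
    # mirroring the begin/commit added around the cleanup SQL.
    with conn:
        for table in TABLES:
            conn.execute(f"DELETE FROM {table}")

conn = sqlite3.connect(":memory:")
for t in TABLES:
    conn.execute(f"CREATE TABLE {t} (id INTEGER PRIMARY KEY, data TEXT)")
conn.execute("INSERT INTO listen (data) VALUES ('x')")
conn.commit()

reset_tables(conn)
remaining = conn.execute("SELECT COUNT(*) FROM listen").fetchone()[0]
```

Note that unlike `TRUNCATE ... RESTART IDENTITY` in Postgres, plain `DELETE FROM` leaves the autoincrement sequences untouched, which is why the tests below stop assuming fixed ids.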
FWIW, the first solution I had tried was to use TRUNCATE in place of CREATE/DROP, and found no noticeable speed improvement. Disappointed by the results, I formed the belief that we shouldn't do any table clearing at all in the tests, and should instead move everything into transactions and undo them with ROLLBACK. That was the approach I tried out in #2094 and it produced great speedups.
However, the changes there were too intrusive, so I was on the lookout for a less invasive approach. On a whim I tried DELETE FROM and observed the speedup. If only I had found the blog post you linked above earlier!
DELETE FROM already speeds up the tests to a good extent, but we can still try transactional tests later if they provide further speedups or other benefits.
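For reference, the transactional-test idea mentioned above can be sketched like this: each test opens a transaction in setup and rolls it back in teardown, so no cleanup SQL runs at all. This is a hypothetical sketch of the pattern (not the code from #2094), using `sqlite3` in autocommit mode so the transaction boundaries are explicit:

```python
import sqlite3

# isolation_level=None puts sqlite3 in autocommit mode, so we control
# BEGIN/ROLLBACK ourselves, as a test harness would.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE listen (id INTEGER PRIMARY KEY, track TEXT)")

def run_one_test():
    conn.execute("BEGIN")  # setUp: open a test-scoped transaction
    conn.execute("INSERT INTO listen (track) VALUES ('a')")
    # the test sees its own writes while the transaction is open
    assert conn.execute("SELECT COUNT(*) FROM listen").fetchone()[0] == 1
    conn.rollback()        # tearDown: undo the writes, no DELETE/TRUNCATE needed

run_one_test()
leftover = conn.execute("SELECT COUNT(*) FROM listen").fetchone()[0]
```

The trade-off is that every DB access in the test must go through the same connection/transaction, which is what made the approach intrusive.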
add begin/commits
Each test now deletes the data in the tables without dropping and recreating tables, sequences, etc. Therefore, the ids will not always be equal to metabrainz_row_id. To fix the tests, use the user id from the row returned when the user is created in the db.
Force-pushed from eafd0eb to bd83b9f
Hello @amCap1712! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found:
Comment last updated at 2022-09-07 04:56:39 UTC
Force-pushed from ac6dcb9 to 19734b9
To speed up LB tests: instead of creating and dropping tables around each test, delete all the data from the tables. This patch approximately halves the overall runtime of the test suite.
Other than that, the changes in the PR update tests, because the various autoincrement sequences are no longer reset between tests. To fix the tests, use the id from the row returned when the data is inserted into the db.
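The fix described above can be sketched as follows. Since sequences keep counting across tests, a test must capture the generated id at insert time rather than hard-coding it. In Postgres this would be `INSERT ... RETURNING id`; in this `sqlite3` sketch (table and column names are illustrative) `lastrowid` plays the same role:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE "user" (id INTEGER PRIMARY KEY, musicbrainz_id TEXT)')

# Sequences are no longer reset between tests, so never assume id == 1 (or any
# fixed value). Capture the generated id from the insert instead.
cur = conn.execute('INSERT INTO "user" (musicbrainz_id) VALUES (?)', ("alice",))
user_id = cur.lastrowid  # Postgres equivalent: INSERT ... RETURNING id

# Later assertions key off the captured id, not a hard-coded one.
row = conn.execute(
    'SELECT musicbrainz_id FROM "user" WHERE id = ?', (user_id,)
).fetchone()
```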