Force-pushed 0515443 to c165de8 (Compare)
In order to reduce the runtime of the shared tests, we reduce the number of rows in the tables used for the shared tests by increasing the interval between the time samples. This reduces the number of rows without changing the upper and lower bounds of the time range. Since this changes the number of rows in the plans, we have to update the tests accordingly.
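The arithmetic behind this can be sketched quickly: thinning the samples shrinks the row count linearly, but because the lower and upper bounds of the time range are unchanged, the number of chunks (range divided by chunk interval) stays the same. The spans, sample intervals, and chunk size below are illustrative, not taken from the actual shared-test setup:

```python
from datetime import timedelta

def sample_counts(span: timedelta, sample_interval: timedelta, chunk_interval: timedelta):
    """Rows and chunks produced when sampling a fixed time span.

    Illustrative only: the real shared-test tables are built in the
    TimescaleDB test suite; these numbers are not from the PR.
    """
    rows = span // sample_interval
    # Ceiling division: chunks must cover the whole span.
    chunks = -(-span // chunk_interval)
    return rows, chunks

span = timedelta(days=7)
chunk = timedelta(days=1)

dense = sample_counts(span, timedelta(seconds=2), chunk)   # original interval
sparse = sample_counts(span, timedelta(seconds=10), chunk)  # thinned-out interval

# Thinning 5x cuts rows 5x but leaves the chunk count unchanged,
# because the time range itself did not shrink.
print(dense)   # (302400, 7)
print(sparse)  # (60480, 7)
```

This is why the plan shapes (chunk exclusion, ChunkAppend structure) are expected to survive the change while row counts in the EXPLAIN output drop.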
Force-pushed c165de8 to c9c4eca (Compare)
@antekresic, @erimatnor: please review this pull request.
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@ Coverage Diff @@
##             main    #7591      +/-   ##
==========================================
+ Coverage   80.06%   82.29%   +2.22%
==========================================
  Files         190      238      +48
  Lines       37181    43745    +6564
  Branches     9450    10981    +1531
==========================================
+ Hits        29770    35999    +6229
- Misses       2997     3401     +404
+ Partials     4414     4345      -69
svenklemm left a comment:
I don't think we should adjust the global shared database. This set is used for many tests, is only generated once per CI run, and is intentionally a bit bigger. If this set is too large for this specific test, we should create a test-local small table for these tests. Keep in mind, though, that the shared set also has other attributes, e.g., chunks with different physical layouts.
I am not sure we need a bigger size. The changes involve "thinning out" the samples so that there are fewer samples but the ranges are the same in all tests, which should keep the "physical layout" of the chunks (the chunk time intervals and the number of chunks). Looking at the changes, I see mostly changes in the row counts, but there are some changes from Merge Joins to Nested Loop Joins as well. There is one case where a timestamp was an even multiple of 2, which I changed to an even multiple of 10, and the result seems to be the same.
In my previous attempt to make the shared tests less flaky, I actually had to increase the table sizes to get meaningful plans: #6550
For reference, here are the 15 slowest tests. Only …
I don't see a way forward here. Unless @mkindahl insists, I'm for closing this one.
Let's close it. |
In order to reduce the runtime of the shared tests, we reduce the number of rows in the tables used for the shared tests by increasing the interval between the time samples. This reduces the number of rows without changing the upper and lower bounds of the time range. Since this changes the number of rows in the plans, we have to update the tests accordingly.
Note: there are a lot of test changes, but they only reduce the number of rows scanned. What is important is that the new plans do not end up with zero rows where there should be some. Note that in some cases there will be zero rows scanned for an Append node, but the corresponding ChunkAppend node is then not scanned at all, so that matches.
Disable-check: force-changelog-file