Change device_inbox stream index to include user #1793
Conversation
erikjohnston assigned NegativeMjark on Jan 10, 2017
erikjohnston added some commits on Jan 10, 2017
Does this index actually help? What does the EXPLAIN look like for the queries before/after? Is this trying to fix #1768?
It does help. New query:
Old query:
Hmm, the only change I see is that it's now using an index-only scan, which looks like it has removed a constant factor from the query.
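For context, the change being debated amounts to a covering index: by appending `user_id` to the stream index, PostgreSQL can answer the pre-fill query from the index alone, without heap fetches. The table, column, and index names below are assumptions inferred from this conversation, not the exact schema delta in the PR:

```sql
-- Sketch only: names are assumed from the discussion, not copied
-- from the migration in this PR.

-- Before: an index on stream_id alone. A query that also needs
-- user_id must fetch each matching row from the table heap.
-- CREATE INDEX device_inbox_stream_id ON device_inbox (stream_id);

-- After: appending user_id makes the index cover
--   SELECT stream_id, user_id FROM device_inbox ORDER BY stream_id DESC
-- so PostgreSQL can use an index-only scan.
CREATE INDEX device_inbox_stream_id_user_id
    ON device_inbox (stream_id, user_id);
```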
I don't think it's worth it. I suspect the change proposed in #1768 would be a better fix for the query performance.
Well, it's impossible to extrapolate that from two data points, but even so a constant factor of two isn't to be sniffed at. Both of those readings will have come from the disk cache, and I've certainly seen that query take a lot longer than a couple of seconds when it's not in the disk cache.
Worth what? The work is done. I'm also concerned about how this is going to scale as we roll out e2e.
Yes, but the fix in #1768 will probably knock a factor of ~20 or so off the query and will make it scale better with more e2e.
Oh, sorry, I misread the PR/issue number and thought you were referring to the limit. I find the query in there a bit dubious, given that it silently relies on there being a unique stream_id per user_id, which isn't true for quite a few streams.
That appears to be because it loads only 100000 rows, rather than e.g. 2313402 rows (there are multiple rows per stream_id). Is there anything in the EXPLAIN that makes you think a backwards scan would be quicker than an index-only scan?
Yes, because it cheats and only loads 100000 rows rather than 2313402, which I'd have thought is entirely reasonable for something that is just trying to prefill a cache to smooth the startup process. I'm also concerned that adding the user_id to the stream index will make some of the other queries less efficient, and will result in synapse chewing more disk space for little benefit.
(Obviously you'd need to fiddle the result slightly to get the correct minimum stream_id if you were using the query from #1768.)
Right, but we can achieve the exact same effect by reducing the limit, which we have. The index-only scan has the benefit that it doesn't need to pull the rows out of the DB, thus reducing overall IO. So I actually expect that adding the user_id to the index is faster for the same number of cache entries.
Other than reducing the number of index rows in a page, it won't make a difference, since the user_id is at the end. I have no reason to believe that it makes much of a difference, and other, much busier, streams use that style of index.
Keeping the stream tables consistent seems nicer.
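One way to check the IO claim is to compare the two plans with `EXPLAIN (ANALYZE, BUFFERS)`. The query shape and index name here are assumptions for illustration, not taken from the PR:

```sql
-- Hypothetical check (table/column/index names assumed):
EXPLAIN (ANALYZE, BUFFERS)
    SELECT stream_id, user_id
      FROM device_inbox
     ORDER BY stream_id DESC
     LIMIT 100000;

-- With user_id included in the index, the plan should report
-- something like:
--   Index Only Scan Backward using device_inbox_stream_id_user_id ...
-- and fewer shared buffers read than a plain Index Scan, since no
-- heap fetches are needed (assuming the visibility map is up to date).
```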
Dropping the limit in #1792 doesn't change the fact that the number of rows it needs to scan is proportional to "average number of devices in the room" * limit. The change proposed in #1768 puts a hard limit on the number of rows scanned, which makes the performance of the query a lot more predictable. Personally, I'd rather have predictable performance when running the startup query than drop a constant factor off it.
```diff
+ */
+
+INSERT into background_updates (update_name, progress_json)
+    VALUES ('device_inbox_stream_index', '{}');
```
NegativeMjark (Contributor) commented on the diff on Jan 12, 2017:
Add a comment to explain that this is to turn the pre-fill startup query into an index-only scan on PostgreSQL.
erikjohnston commented on Jan 10, 2017 (edited):
This makes fetching the most recently changed users much quicker, and
brings it in line with e.g. presence_stream indices.