
Change device_inbox stream index to include user #1793

Merged
merged 4 commits into from Jan 13, 2017

Owner

erikjohnston commented Jan 10, 2017 edited

This makes fetching the most recently changed users much quicker, and
brings it in line with e.g. presence_stream indices.

Change device_inbox stream index to include user
This makes fetching the most recently changed users much quicker, and
brings it in line with e.g. presence_stream indices.

erikjohnston added some commits Jan 10, 2017

Contributor

NegativeMjark commented Jan 10, 2017

Does this index actually help? What does the explain look like for the queries before/after?

Is this trying to fix #1768 ?

Owner

erikjohnston commented Jan 10, 2017

It does help.

New query:

matrix=# explain analyze SELECT user_id, MAX(stream_id) FROM device_inbox WHERE stream_id > 291509 - 100000 GROUP BY user_id;
                                                                           QUERY PLAN                                                                            
-----------------------------------------------------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=118140.66..118147.50 rows=684 width=27) (actual time=620.251..621.149 rows=5270 loops=1)
   Group Key: user_id
   ->  Index Only Scan using device_cache_test_2 on device_inbox  (cost=0.56..106708.76 rows=2286381 width=27) (actual time=0.050..230.960 rows=2311840 loops=1)
         Index Cond: (stream_id > 191509)
         Heap Fetches: 24067
 Planning time: 0.464 ms
 Execution time: 621.318 ms

Old query:

matrix=# explain analyze SELECT user_id, MAX(stream_id) FROM device_inbox WHERE stream_id > 291509 - 100000 GROUP BY user_id;
                                                                          QUERY PLAN                                                                           
---------------------------------------------------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=292078.26..292085.10 rows=684 width=27) (actual time=1251.428..1252.534 rows=5270 loops=1)
   Group Key: user_id
   ->  Index Scan using device_inbox_stream_id on device_inbox  (cost=0.43..280646.35 rows=2286381 width=27) (actual time=0.032..760.074 rows=2311840 loops=1)
         Index Cond: (stream_id > 191509)
 Planning time: 0.085 ms
 Execution time: 1252.684 ms
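For illustration, the effect of a covering index can be reproduced in miniature with SQLite, whose "covering index" is the analogue of Postgres's index-only scan. This is a hypothetical sketch, not Synapse code: the table is cut down to the relevant columns and the index name is illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE device_inbox (user_id TEXT, device_id TEXT,"
    " stream_id INTEGER, message_json TEXT)"
)
# Index on (stream_id, user_id): covers every column the prefill query touches,
# so the engine never has to visit the table heap to answer it.
conn.execute(
    "CREATE INDEX device_inbox_stream_id_user_id"
    " ON device_inbox (stream_id, user_id)"
)
conn.executemany(
    "INSERT INTO device_inbox VALUES (?, ?, ?, ?)",
    [("@a:hs", "dev1", 1, "{}"), ("@a:hs", "dev1", 7, "{}"),
     ("@b:hs", "dev2", 5, "{}"), ("@b:hs", "dev2", 9, "{}")],
)
# The prefill query from the EXPLAIN above: latest stream_id per user
# past a threshold.
rows = conn.execute(
    "SELECT user_id, MAX(stream_id) FROM device_inbox"
    " WHERE stream_id > ? GROUP BY user_id",
    (4,),
).fetchall()
print(dict(rows))

# The query plan shows whether the engine can satisfy this from the
# index alone (look for "COVERING INDEX" in SQLite's plan output).
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT user_id, MAX(stream_id) FROM device_inbox"
    " WHERE stream_id > 4 GROUP BY user_id"
).fetchall()
print(plan)
```

The same shape applies on Postgres: once (stream_id, user_id) covers the query, the planner can choose an index-only scan and skip heap fetches for all-visible pages.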
Contributor

NegativeMjark commented Jan 10, 2017

Hmm the only change I see is that it's using an index only scan now, which looks like it's removed a constant factor from the query.

Contributor

NegativeMjark commented Jan 10, 2017

I don't think it's worth it. I suspect the change proposed in #1768 would be a better fix to the query performance.

Owner

erikjohnston commented Jan 10, 2017

Hmm the only change I see is that it's using an index only scan now, which looks like it's removed a constant factor from the query.

Well, it's impossible to extrapolate that from two data points, but even so a constant factor of two isn't necessarily something to be sniffed at. Both of those readings will have come from the disk cache, and I've certainly seen that query take a lot longer than a couple of seconds when it's not in the disk cache.

Owner

erikjohnston commented Jan 10, 2017

I don't think it's worth it. I suspect the change proposed in #1768 would be a better fix to the query performance.

Worth what? The work is done. I'm also concerned about how this is going to scale as we roll out e2e.

Contributor

NegativeMjark commented Jan 10, 2017 edited

Yes, but the fix in #1768 will probably knock off a factor ~20 or so from the query and will make it scale better with more e2e.

Owner

erikjohnston commented Jan 10, 2017

Oh, sorry, I misread the PR/issue number and thought you were referring to the limit.

I find the query in there a bit dubious, given that it silently relies on there being a unique stream_id per user_id, which is not true for quite a few streams.

Owner

erikjohnston commented Jan 10, 2017

Yes, but the fix in #1768 will probably knock off a factor ~20 or so from the query and will make it scale better with e2e.

That appears to be because it loads only 100000 rows, rather than e.g. 2313402 rows (since there are multiple rows per stream_id).

Is there a reason from the EXPLAIN you think that it would be quicker to do a backwards scan than an index only scan?

Contributor

NegativeMjark commented Jan 10, 2017

Yes, because it cheats and only loads 100000 rows rather than 2313402, which I'd have thought is entirely reasonable for something that is just trying to prefill a cache to smooth the startup process.

I'm also concerned that adding the user_id to the stream index will make some of the other queries less efficient, and will result in synapse chewing up more disk space for little benefit.

Contributor

NegativeMjark commented Jan 10, 2017

(obviously you'd need to fiddle the result slightly to get the correct minimum stream_id if you were using the query from #1768)

Owner

erikjohnston commented Jan 10, 2017

Yes, because it cheats and only loads 100000 rows rather than 2313402, which I'd have thought is entirely reasonable for something that is just trying to prefill a cache to smooth the startup process.

Right, but we can achieve the exact same effect by reducing the limit. Which we have. The index only scan has the benefit that it doesn't need to pull the rows out of the DB, reducing overall IO. So I actually expect adding the user_id to the index to be faster for the same number of cache entries.

I'm also concerned that adding the user_id to the stream index will make some of the other queries less efficient, and will result in synapse chewing up more disk space for little benefit.

Other than reducing the number of index rows that fit in a page, it won't make a difference, since the user_id is at the end. I have no reason to believe it makes much of a difference, and other, much busier, streams use that style of index.

(obviously you'd need to fiddle the result slightly to get the correct minimum stream_id if you were using the query from #1768)

Keeping the stream tables consistent seems nicer.

Contributor

NegativeMjark commented Jan 10, 2017

Dropping the limit in #1792 doesn't change the fact that the number of rows it needs to scan is proportional to "average number of devices in room" * limit. The change proposed in #1768 puts a hard limit on the number of rows scanned which makes the performance of the query a lot more predictable.

Personally I'd rather have predictable performance when running the startup query rather than dropping a constant factor off of it.
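For illustration, the #1768-style approach might look something like the sketch below (hypothetical, not the actual proposed patch): scan at most N of the newest rows, aggregate per user in the application, and keep the smallest stream_id scanned so the caller knows how far back the results are complete (the "fiddle" mentioned above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE device_inbox (user_id TEXT, stream_id INTEGER)")
conn.executemany(
    "INSERT INTO device_inbox VALUES (?, ?)",
    [("@a:hs", 1), ("@b:hs", 2), ("@a:hs", 3), ("@b:hs", 4), ("@a:hs", 5)],
)

LIMIT = 3  # hard cap on rows scanned, independent of table size

# Walk the stream backwards: at most LIMIT rows are ever read.
scanned = conn.execute(
    "SELECT user_id, stream_id FROM device_inbox"
    " ORDER BY stream_id DESC LIMIT ?",
    (LIMIT,),
).fetchall()

latest = {}
for user_id, stream_id in scanned:
    # Rows arrive in descending stream_id order, so the first row seen
    # for a user is that user's maximum.
    latest.setdefault(user_id, stream_id)

# The cache is only complete down to the smallest stream_id scanned;
# anything older than this bound must be treated as unknown.
min_seen = scanned[-1][1] if scanned else None
print(latest, min_seen)
```

The row count scanned here is bounded by LIMIT regardless of how many devices exist, which is the predictability argument: the covering-index version still scans every row above the threshold.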

+ */
+
+INSERT into background_updates (update_name, progress_json)
+ VALUES ('device_inbox_stream_index', '{}');
Contributor

NegativeMjark commented Jan 12, 2017

Add a comment to explain that this is to turn the pre-fill startup query into an index-only scan on PostgreSQL.

erikjohnston merged commit f0325a9 into develop Jan 13, 2017

0 of 5 checks passed

Sytest Dendron (Merged PR): Build triggered. sha1 is merged.
Sytest Postgres (Merged PR): Build triggered. sha1 is merged.
Sytest SQLite (Merged PR): Build triggered. sha1 is merged.
continuous-integration/travis-ci/pr: The Travis CI build is in progress
continuous-integration/travis-ci/push: The Travis CI build is in progress

erikjohnston deleted the erikj/change_device_inbox_index branch Mar 29, 2017
