Dynamically adjust postgres feature queue size
Lower the default queue size, but automatically adjust it based on how long each fetch takes. This change keeps fetching responsive even for slow connections or databases. The current approach with a fixed queue size can result in very slow feature fetching, which prevents UI updates for multiple seconds.
1 parent 651b5c6, commit fbe4be8
Showing 2 changed files with 14 additions and 7 deletions.
fbe4be8
Hmm... this will take around 12 roundtrips before 2000 features are requested at once, as was the case before this patch.
I really think this value should be cached somewhere (the connection pool?) so it doesn't need to be re-estimated for each iterator.
fbe4be8
I couldn't work out a reasonable place to cache this. It really needs to be in the provider, but we can't set a value in the provider from an iterator.
Re the 12 trips to get to 2000: that's why I tested this on the best-case scenario, where large queue sizes are most likely to benefit. At least on large datasets there's no regression here...
fbe4be8
What about the connection pool? I think that's quite a good place, with a QHash keyed by connInfo + sql (rough sketch below). I don't think a benchmark on a local database should be treated as representative (latency < 1 ms) ;)
I'm even wondering whether this will negatively impact situations where the speed (as in volume) is good but the latency is bad. Those connections will stick at 1 feature per request, although they would benefit a lot from bigger queue sizes.
I think it would be great to have the possibility to show the queue size next to the measured time in the message log (like the rendering times can be shown) to get a better overview of real-world examples.
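Something along these lines, purely to illustrate where the cache could live; none of these names exist in QGIS, this is only a sketch of the suggestion:

```cpp
#include <QHash>
#include <QString>

// Hypothetical cache held by the connection pool, keyed by connInfo + sql,
// so a new iterator can start from the queue size the previous one settled on.
class QueueSizeCache
{
  public:
    long lookup( const QString &connInfo, const QString &sql, long defaultSize ) const
    {
      return mSizes.value( connInfo + sql, defaultSize );
    }

    void store( const QString &connInfo, const QString &sql, long size )
    {
      mSizes.insert( connInfo + sql, size );
    }

  private:
    QHash<QString, long> mSizes;
};
```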
fbe4be8
Ah sorry, I should stress that the benchmark was done as a way to test the best case, which would benefit most from a large queue. I'll be testing on a super slow server over the coming week to verify there are no regressions there either. At least from the point of view of using QGIS, the change is night and day for the slow database.
fbe4be8
If it helps in your scenario, that's a really good start and worth it! I guess for 8% of users it's better and for 88% it's the same; what I wonder about is the other 4%.
What I'm trying to say is that there are different reasons a connection can be perceived as slow (latency vs. bandwidth).
I think we should get some good benchmark results to tune the min time, max time and default queue size, and decide whether this is always a win or whether a queue size override needs to be available.