
Dynamically adjust postgres feature queue size

Lower the default queue size, but automatically adjust it
based on how long each fetch takes. This change keeps fetching
responsive even for slow connections or databases. The current
approach with a fixed queue size can result in very slow feature
fetching, which prevents UI updates for multiple seconds.
nyalldawson committed Mar 4, 2017
1 parent 651b5c6 commit fbe4be884bd0ae05235464d613cf87406ce1ca31
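The adaptive sizing this commit adds can be summarised as a small stand-alone sketch in plain C++. The free function `adjustQueueSize` is illustrative only; in the commit the logic lives inline in `QgsPostgresFeatureIterator::fetchFeature()` and times the fetch with `QElapsedTimer`:

```cpp
#include <cassert>

// Illustrative sketch (not QGIS code) of the commit's adaptive queue
// sizing: halve the queue after a slow fetch (> 500 ms), double it
// after a fast one (< 50 ms). Shrinking stops at 1 and growing stops
// once the size reaches 10000.
int adjustQueueSize( int currentSize, long long elapsedMs )
{
  if ( elapsedMs > 500 && currentSize > 1 )
    return currentSize / 2;
  if ( elapsedMs < 50 && currentSize < 10000 )
    return currentSize * 2;
  return currentSize;
}
```

Starting from the new default of 1, a run of fast fetches grows the queue geometrically, so a fast connection quickly approaches the old fixed size of 2000, while a slow connection stays near 1 and the UI keeps getting features to draw.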
@@ -22,15 +22,12 @@
 #include "qgsmessagelog.h"
 #include "qgssettings.h"
 
+#include <QElapsedTimer>
 #include <QObject>
 
-
-const int QgsPostgresFeatureIterator::FEATURE_QUEUE_SIZE = 2000;
-
-
 QgsPostgresFeatureIterator::QgsPostgresFeatureIterator( QgsPostgresFeatureSource *source, bool ownSource, const QgsFeatureRequest &request )
   : QgsAbstractFeatureIteratorFromSource<QgsPostgresFeatureSource>( source, ownSource, request )
-  , mFeatureQueueSize( FEATURE_QUEUE_SIZE )
+  , mFeatureQueueSize( 1 )
   , mFetched( 0 )
   , mFetchGeometry( false )
   , mExpressionCompiled( false )
@@ -223,6 +220,9 @@ bool QgsPostgresFeatureIterator::fetchFeature( QgsFeature &feature )
 
   if ( mFeatureQueue.empty() && !mLastFetch )
   {
+    QElapsedTimer timer;
+    timer.start();
+
     QString fetch = QStringLiteral( "FETCH FORWARD %1 FROM %2" ).arg( mFeatureQueueSize ).arg( mCursorName );
     QgsDebugMsgLevel( QString( "fetching %1 features." ).arg( mFeatureQueueSize ), 4 );
 
@@ -258,6 +258,15 @@ bool QgsPostgresFeatureIterator::fetchFeature( QgsFeature &feature )
       } // for each row in queue
     }
     unlock();
+
+    if ( timer.elapsed() > 500 && mFeatureQueueSize > 1 )
+    {
+      mFeatureQueueSize /= 2;
+    }
+    else if ( timer.elapsed() < 50 && mFeatureQueueSize < 10000 )
+    {
+      mFeatureQueueSize *= 2;
+    }
   }
 
   if ( mFeatureQueue.empty() )
@@ -110,8 +110,6 @@ class QgsPostgresFeatureIterator : public QgsAbstractFeatureIteratorFromSource<QgsPostgresFeatureSource>
 
     bool mIsTransactionConnection;
 
-    static const int FEATURE_QUEUE_SIZE;
-
   private:
     virtual bool providerCanSimplify( QgsSimplifyMethod::MethodType methodType ) const override;
 
5 comments on commit fbe4be8

@m-kuhn (Member) replied Mar 5, 2017

Hmm... this will take around 12 round trips before 2000 features are requested, as before this patch.

I really think this value should be cached somewhere (connection pool?) so it doesn't need to be re-estimated for each iterator.
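The round-trip estimate can be checked with a small sketch (`roundTripsToRequest` is a hypothetical helper, not QGIS code), assuming every fetch falls under the 50 ms fast threshold so the queue size doubles after each round trip:

```cpp
#include <cassert>

// Hypothetical helper (not QGIS code): starting from the new default
// queue size of 1 and doubling after every fast fetch, count how many
// FETCH FORWARD round trips are issued before `features` features
// have been requested in total.
int roundTripsToRequest( int features )
{
  int queueSize = 1;
  int requested = 0;
  int trips = 0;
  while ( requested < features )
  {
    requested += queueSize;
    ++trips;
    if ( queueSize < 10000 )
      queueSize *= 2;
  }
  return trips;
}
```

`roundTripsToRequest( 2000 )` evaluates to 11 (1 + 2 + … + 1024 = 2047 ≥ 2000), in line with the "around 12" estimate above.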

@nyalldawson (Collaborator, Author) replied Mar 5, 2017

I couldn't work out a reasonable place to cache this. It really needs to be in the provider, but we can't set a value in the provider from an iterator.

Re the 12 trips to get to 2000 - that's why I tested this on the best-case scenario, where large queue sizes are most likely to benefit. At least on large datasets there's no regression here...

@m-kuhn (Member) replied Mar 5, 2017

What about the connection pool? I think that's quite a good place, with a QHash keyed on connInfo + sql.

I don't think a benchmark on a local database should be treated as representative (latency < 1ms) ;)

I'm even wondering whether this will negatively impact situations where the speed (as in volume) is good but the latency is bad. They will stick at 1 feature per request, although they would benefit a lot from bigger queue sizes.

I think it would be great to have the possibility to show the queue size next to the measured time in the message log (like the rendering times can be shown) to get a better overview of real world examples.
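The connection-pool cache suggested above could look roughly like this. This is a sketch only: `QueueSizeCache`, its methods, and the key format are all invented for illustration (nothing like this exists in the QGIS code base, and a real version would likely use QHash):

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Sketch of the suggested cache: remember the last adapted queue size
// per (connection info, SQL) pair so a new iterator can start from the
// learned value instead of re-estimating from 1. All names invented.
class QueueSizeCache
{
  public:
    // Return the cached size for this connection/query, or 1 (the
    // commit's starting value) when nothing has been learned yet.
    int startingSize( const std::string &connInfo, const std::string &sql ) const
    {
      const auto it = mSizes.find( connInfo + '\n' + sql );
      return it == mSizes.end() ? 1 : it->second;
    }

    // Would be called when an iterator closes, storing its final size.
    void store( const std::string &connInfo, const std::string &sql, int size )
    {
      mSizes[ connInfo + '\n' + sql ] = size;
    }

  private:
    std::unordered_map<std::string, int> mSizes;
};
```

With something like this, only the first iterator for a given query pays the ramp-up cost; later iterators start near the previously learned size.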

@nyalldawson (Collaborator, Author) replied Mar 5, 2017

Ah sorry, I should stress that that benchmark was done to test the best case, which would benefit most from a large queue. I'll be testing on a super slow server over the coming week to verify there are no regressions there either. At least from a QGIS-usage point of view, the change is night and day for the slow database.

@m-kuhn (Member) replied Mar 5, 2017

If it helps in your scenario, that's a really good start and worth it! I guess for 8% it's better and for 88% it's the same. What I wonder about is the other 4% of users.

What I'm trying to say is that there are different reasons for a connection being perceived as slow (latency vs. bandwidth).
I think we should get some good benchmark results to tune the min time / max time / default queue size, and decide whether this is always good or whether a queue-size override needs to be available.
