Don't wastefully recalculate memory provider extent after every
feature addition
Previously, the memory provider automatically recalculated
the extent of the layer after new features were added, by
looping through the entire set of existing features and
calculating their bounding boxes. This is very wasteful, as
many code paths add features one-by-one, so every single
insert iterated over every existing feature, making the total
cost of adding n features quadratic. This caused memory layers
to slow to a crawl after many features were added.
This commit improves the logic so that if an existing layer
extent is known, it's updated on the fly as each individual
feature is added: instead of looping through all features, we
just expand the existing known extent with the added feature's
bounding box. In all other cases (adding when no extent is
cached, or deleting/modifying features, which could shrink the
extent), we just invalidate the cached extent and defer the
actual calculation until it's next requested.
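
For illustration, a minimal sketch of this logic against the
QGIS API (QgsRectangle, QgsFeature and their methods are real;
the free functions, the mExtent parameter and the feature map
are hypothetical stand-ins for the provider's internals):

    #include <QMap>
    #include "qgsfeature.h"
    #include "qgsgeometry.h"
    #include "qgsrectangle.h"

    // A null QgsRectangle marks the cached extent as unknown.

    // Adding: O(1) per insert when a valid cached extent exists --
    // just grow it by the new feature's bounding box.
    void onFeatureAdded( const QgsFeature &feature, QgsRectangle &mExtent )
    {
      if ( !mExtent.isNull() && feature.hasGeometry() )
        mExtent.combineExtentWith( feature.geometry().boundingBox() );
      // otherwise leave the cache null; it's rebuilt lazily below
    }

    // Deleting/modifying: the extent may shrink, which can't be
    // tracked incrementally, so just invalidate the cache.
    void onFeatureDeletedOrChanged( QgsRectangle &mExtent )
    {
      mExtent = QgsRectangle();  // null rectangle == unknown
    }

    // Requesting: the full O(n) rescan now runs only on demand,
    // when the cache is stale.
    QgsRectangle extent( QgsRectangle &mExtent,
                         const QMap<QgsFeatureId, QgsFeature> &features )
    {
      if ( mExtent.isNull() )
      {
        for ( const QgsFeature &f : features )
        {
          if ( !f.hasGeometry() )
            continue;
          if ( mExtent.isNull() )
            mExtent = f.geometry().boundingBox();
          else
            mExtent.combineExtentWith( f.geometry().boundingBox() );
        }
      }
      return mExtent;
    }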
Makes memory layers many orders of magnitude faster
when adding lots of features (e.g. when memory providers
are used as temporary outputs in processing)