
Using Multipolygons for more efficient joins #104

geoHeil opened this issue Jun 11, 2017 · 1 comment



commented Jun 11, 2017

I have polygons like those in the link, and there are lots of them, which leads to gigantic RAM consumption (#91) as well as a very CPU-intensive join.

Description of my case: as you can see in the link, each object emits 2-3k polygons, and I have roughly 100,000 objects. For each polygon there is an ID that links back to the object, plus a value. By aggregating the polygons into multipolygons (one per object per value), their number, and thus the number of lookups, should shrink considerably. Is there a possibility to implement this in GeoSpark?
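The aggregation step described above can be sketched without any spatial library: group the individual polygons by the composite key (object ID, value) so each group becomes one multipolygon candidate for the join. This is a minimal illustration with hypothetical data; the record layout and field names are assumptions, and in GeoSpark the geometries would be JTS objects rather than vertex lists.

```python
from collections import defaultdict

# Hypothetical records: (object_id, value, polygon). A polygon is represented
# here as a plain list of (x, y) vertices, purely as a stand-in geometry.
records = [
    (1, "a", [(0, 0), (1, 0), (1, 1)]),
    (1, "a", [(2, 0), (3, 0), (3, 1)]),
    (1, "b", [(4, 0), (5, 0), (5, 1)]),
    (2, "a", [(0, 2), (1, 2), (1, 3)]),
]

def to_multipolygons(records):
    """Aggregate individual polygons into one multipolygon per (object_id, value)."""
    groups = defaultdict(list)
    for object_id, value, polygon in records:
        groups[(object_id, value)].append(polygon)
    return dict(groups)

multis = to_multipolygons(records)
print(len(records))  # 4 candidate geometries before aggregation
print(len(multis))   # 3 multipolygons afterwards
```

With 2-3k polygons collapsing into a handful of multipolygons per object, the number of candidate pairs the spatial join must examine drops by the same factor, which is the saving the issue is asking for.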




commented Jun 11, 2017


Doing an ST_Intersection is much slower than relation checks such as ST_Intersects, ST_CoveredBy, and ST_Within. In many situations you know the intersection of two geometries without actually computing it; in these cases, you can skip the costly ST_Intersection call.

Couldn't the join be optimized to take advantage of this?
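The predicate-first strategy quoted above can be sketched with axis-aligned rectangles standing in for real geometries: run the cheap relation checks first, and only fall through to the expensive intersection when none of them already determines the answer. The function names are hypothetical analogues of the ST_* functions, not GeoSpark APIs.

```python
# Rectangles as (xmin, ymin, xmax, ymax) tuples: hypothetical stand-ins for
# real geometries, used only to illustrate predicate-first short-circuiting.

def intersects(a, b):
    """Cheap relation check (analogue of ST_Intersects)."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def covered_by(a, b):
    """True if a lies entirely within b (analogue of ST_CoveredBy)."""
    return b[0] <= a[0] and b[1] <= a[1] and a[2] <= b[2] and a[3] <= b[3]

def intersection(a, b):
    """The costly full computation (analogue of ST_Intersection)."""
    return (max(a[0], b[0]), max(a[1], b[1]), min(a[2], b[2]), min(a[3], b[3]))

def smart_intersection(a, b):
    # Predicate-first: decide the result from cheap checks where possible.
    if not intersects(a, b):
        return None            # disjoint: the intersection is empty
    if covered_by(a, b):
        return a               # a lies inside b: the intersection is a itself
    if covered_by(b, a):
        return b               # b lies inside a: the intersection is b itself
    return intersection(a, b)  # only now pay for the full computation

print(smart_intersection((0, 0, 2, 2), (-1, -1, 5, 5)))  # (0, 0, 2, 2)
```

For real polygons the relation checks are far cheaper than clipping, so in a join where most candidate pairs are disjoint or fully contained, the expensive branch runs only for the genuinely overlapping minority.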
