PERF: improve reading of geoarrow encoded Parquet (avoid converting coords to geopandas object dtype) #3322
This reduces the time to read a simple geoparquet file with 10 million points from about 6 seconds to about 3 seconds.
Surprisingly, this also reduces the time when reading such a file with the default WKB encoding: for some reason, pyarrow's conversion of a variable-size binary column to numpy is faster than its conversion to pandas. That doesn't really make sense, since both create the same object-dtype array; I will have to investigate and report upstream to pyarrow.
This takes a similar approach as the GeoArrow import code (#3301), i.e. only converting the attributes from Arrow -> Pandas, and then separately the geometry columns. In case of geoarrow-encoded columns, this avoids converting the nested struct to python lists/dictionaries (which we then don't use anyway, because we create the geometries directly from the raw Arrow data).
It is a bit unfortunate that this logic is somewhat duplicated between arrow.py and geoarrow.py. But the problem is that for `from_arrow` the logic is based on checking the Arrow extension metadata, while for GeoParquet we need to support generic files that might not have that Arrow-specific metadata.

(It's also a bit disappointing that creating points from x/y values is only slightly faster than parsing WKB, but that's something to profile on the shapely side.)