# Best Practices guide for creation of good GeoParquet files (focused on distribution) #254
**Draft** · **cholmes** wants to merge 7 commits into `main` from `cholmes/distro-guide`
## Conversation
**paleolimbot** reviewed on Mar 13, 2025 · Comment on lines +220 to +221
Feel free to take out the comments (those were more for me writing this or for a future blog post).
@jiayuasu Is this about right?
**Suggested change:**

### Sedona

```python
import glob

from sedona.spark import SedonaContext, GridType
from sedona.utils.structured_adapter import StructuredAdapter
from sedona.sql.st_functions import ST_GeoHash

# Configuring these lines to do the right thing can be tricky:
# https://sedona.apache.org/latest/setup/install-python/?h=python#prepare-sedona-spark-jar
config = (
    SedonaContext.builder()
    .config("spark.executor.memory", "6G")
    .config("spark.driver.memory", "6G")
    .getOrCreate()
)
sedona = SedonaContext.create(config)

# Read from GeoParquet or some other data source + do any spatial ops/transformations
# using Sedona pyspark or SQL
df = sedona.read.format("geoparquet").load(
    "/Users/dewey/gh/geoarrow-data/microsoft-buildings/files/microsoft-buildings_point_geo.parquet"
)

# Create the partitioning. KDBTREE provides a nice balance: tight (but
# well-separated) partitions with approximately equal numbers of features
# in each file. Note that num_partitions is only a suggestion (the actual
# value may differ).
rdd = StructuredAdapter.toSpatialRdd(df, "geometry")
rdd.analyze()

# We call the WithoutDuplicates() variant to ensure that we don't introduce
# duplicate features (i.e., each feature is assigned a single partition instead
# of each feature being assigned to every partition it intersects). For points
# the behaviour of spatialPartitioning() and spatialPartitioningWithoutDuplicates()
# is identical.
rdd.spatialPartitioningWithoutDuplicates(GridType.KDBTREE, num_partitions=8)

# Get the grids for this partitioning (you can reuse this partitioning
# by passing it to some other spatialPartitioningWithoutDuplicates() to
# ensure a different write has identical partition extents)
rdd.getPartitioner().getGrids()

df_partitioned = StructuredAdapter.toSpatialPartitionedDf(rdd, sedona)

# Optional: sort within partitions for tighter row group bounding boxes within files
df_partitioned = (
    df_partitioned.withColumn("geohash", ST_GeoHash(df_partitioned.geometry, 12))
    .sortWithinPartitions("geohash")
    .drop("geohash")
)

# Write in parallel directly from each executor node. This scales nicely to
# (much) bigger-than-memory data, particularly if done with a configured cluster
# (e.g., Databricks, Glue, Wherobots). There are several options for GeoParquet
# writing: https://sedona.apache.org/latest/tutorial/files/geoparquet-sedona-spark/
df_partitioned.write.format("geoparquet").mode("overwrite").save(
    "buildings_partitioned"
)

# The output files have funny names because Spark writes them this way
files = glob.glob("buildings_partitioned/*.parquet")
len(files)
```
**jiayuasu** replied: Yes. LGTM!
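The comment in the suggestion notes that the partitioner can be reused so that a different write gets identical partition extents. A minimal sketch of what that might look like, assuming a hypothetical second dataset (`another_layer.parquet`) and that `spatialPartitioningWithoutDuplicates()` accepts an existing partitioner, as the comment describes:

```python
# Sketch: partition a second dataset with the same extents as the first.
# "another_layer.parquet" is a hypothetical path used for illustration.
df2 = sedona.read.format("geoparquet").load("another_layer.parquet")
rdd2 = StructuredAdapter.toSpatialRdd(df2, "geometry")
rdd2.analyze()

# Pass the existing partitioner instead of building a new one, so both
# writes share identical partition extents
rdd2.spatialPartitioningWithoutDuplicates(rdd.getPartitioner())

df2_partitioned = StructuredAdapter.toSpatialPartitionedDf(rdd2, sedona)
df2_partitioned.write.format("geoparquet").mode("overwrite").save(
    "another_layer_partitioned"
)
```

Partitioning two related layers with the same grids means features in the same area land in correspondingly-numbered files, which can simplify working across the two datasets.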
An attempt to pull together recommendations / best practices as discussed in #251.

More work is needed, and feedback / help is very welcome. There is likely more to discuss to get the recommendations right, but I wanted to put up something for people to react to.