diff --git a/docs/setup/overview.md b/docs/setup/overview.md
index 2544347f8b..02cc30b2a4 100644
--- a/docs/setup/overview.md
+++ b/docs/setup/overview.md
@@ -13,6 +13,7 @@
- [x] Spatial DataFrame/SQL on Spark
- [x] Spatial DataStream on Flink
- [x] Spatial Table/SQL on Flink
+- [x] Spatial SQL on Snowflake
## Complex spatial objects
@@ -28,6 +29,8 @@
## Rich spatial analytics tools
- [x] Coordinate Reference System / Spatial Reference System Transformation
-- [x] High resolution map generation: [Visualize Spatial DataFrame/RDD](../../tutorial/viz)
-- [x] Apache Zeppelin integration
+- [x] Apache Zeppelin dashboard integration
+- [x] Integrate with a variety of Python tools including Jupyter Notebook, GeoPandas, and Shapely
+- [x] Integrate with a variety of visualization tools including KeplerGL and DeckGL
+- [x] High-resolution and scalable map generation: [Visualize Spatial DataFrame/RDD](../../tutorial/viz)
- [x] Support Scala, Java, Python, R
diff --git a/docs/setup/snowflake/install.md b/docs/setup/snowflake/install.md
index 58034fe81f..0e83b61db7 100644
--- a/docs/setup/snowflake/install.md
+++ b/docs/setup/snowflake/install.md
@@ -20,11 +20,11 @@ A stage is a Snowflake object that maps to a location in a cloud storage provide
In this case, we will create a stage named `ApacheSedona` in the `public` schema of the database created in the previous step. The stage will be used to load Sedona's JAR files into the database. We will choose a `Snowflake managed` stage.
-![Create Stage](../../../../image/snowflake/snowflake-1.png)
+
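+If you prefer SQL over the web UI, a Snowflake-managed (internal) stage can also be created with a single statement. This is a minimal sketch; the database name `SEDONA_TEST` is an assumption based on the worksheet setup in Step 5, so substitute your own database name.
+
+```sql
+-- Create a Snowflake-managed (internal) stage in the PUBLIC schema.
+-- SEDONA_TEST is a placeholder database name; replace it with yours.
+CREATE STAGE IF NOT EXISTS SEDONA_TEST.PUBLIC.ApacheSedona;
+```
+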
After creating the stage, you should be able to see it in the database.
-![Stage Details](../../../../image/snowflake/snowflake-2.png)
+
You can refer to the [Snowflake Documentation](https://docs.snowflake.com/en/sql-reference/sql/create-stage.html) to learn how to create a stage.
@@ -39,7 +39,7 @@ Then you can upload the 2 JAR files to the stage created in the previous step.
After uploading the 2 JAR files, you should be able to see them in the stage.
-![JAR Details](../../../../image/snowflake/snowflake-3.png)
+
You can refer to the [Snowflake Documentation](https://docs.snowflake.com/en/sql-reference/sql/put.html) to learn how to upload files to a stage.
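+
+If you upload from SnowSQL rather than the web UI, the `PUT` command can load the files into the stage. This is a sketch with hypothetical local paths; replace them with the actual locations of the 2 JAR files, and keep `AUTO_COMPRESS=FALSE` so Snowflake does not gzip the JARs.
+
+```sql
+-- Hypothetical file names and paths; point these at the 2 JAR files downloaded earlier.
+PUT file:///tmp/sedona-snowflake.jar @SEDONA_TEST.PUBLIC.ApacheSedona AUTO_COMPRESS=FALSE;
+PUT file:///tmp/geotools-wrapper.jar @SEDONA_TEST.PUBLIC.ApacheSedona AUTO_COMPRESS=FALSE;
+```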
@@ -49,17 +49,17 @@ A schema is a Snowflake object that maps to a database. You can use a schema to
In this case, we will create a schema named `SEDONA` in the database created in the previous step. The schema will be used to create Sedona's functions.
-![Create Schema](../../../../image/snowflake/snowflake-4.png)
+
You can find your schema in the database as follows:
-![Schema Details](../../../../image/snowflake/snowflake-5.png)
+
You can refer to the [Snowflake Documentation](https://docs.snowflake.com/en/sql-reference/sql/create-schema.html) to learn how to create a schema.
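+
+Equivalently, the schema can be created with a single SQL statement. A minimal sketch, again assuming the database is named `SEDONA_TEST`:
+
+```sql
+-- Create the schema that will hold Sedona's functions.
+CREATE SCHEMA IF NOT EXISTS SEDONA_TEST.SEDONA;
+```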
## Step 4: Get the SQL script for creating Sedona's functions
-You will need to download [sedona-snowflake.sql](../../../../image/snowflake/sedona-snowflake.sql) to create Sedona's functions in the schema created in the previous step.
+You will need to download [sedona-snowflake.sql](../../../image/snowflake/sedona-snowflake.sql) to create Sedona's functions in the schema created in the previous step.
You can also get this SQL script by running the following command:
@@ -75,11 +75,11 @@ We will create a worksheet in the database created in the previous step, and run
In this case, we will choose the option `Create Worksheet from SQL File`.
-![Create Worksheet](../../../../image/snowflake/snowflake-6.png)
+
In the worksheet, choose `SEDONA_TEST` as the database and `PUBLIC` as the schema. The SQL script should appear in the worksheet. Then right-click the worksheet and choose `Run All`. Snowflake will take about 3 minutes to create Sedona's functions.
-![Worksheet](../../../../image/snowflake/snowflake-7.png)
+
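+
+If you prefer running the script outside the web UI, the equivalent session context can be set in SnowSQL. A sketch, assuming the downloaded file is named `sedona-snowflake.sql` and sits in the current directory:
+
+```sql
+-- Point the session at the same database and schema as the worksheet.
+USE DATABASE SEDONA_TEST;
+USE SCHEMA PUBLIC;
+-- Then execute the downloaded script with the SnowSQL client command:
+-- !source sedona-snowflake.sql
+```
+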
## Step 6: Verify the installation
@@ -97,4 +97,4 @@ SRID=4326;POINT (1 2)
The worksheet should look like this:
-![Worksheet](../../../../image/snowflake/snowflake-8.png)
+
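+
+For reference, the expected output above can be reproduced with a query like the following. A minimal sketch, assuming Sedona's functions were installed into the `SEDONA` schema created in Step 3:
+
+```sql
+-- Smoke test: build a point, tag it with SRID 4326, and print it as EWKT.
+SELECT SEDONA.ST_ASEWKT(SEDONA.ST_SETSRID(SEDONA.ST_POINT(1, 2), 4326));
+-- Expected: SRID=4326;POINT (1 2)
+```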
diff --git a/docs/tutorial/snowflake/sql.md b/docs/tutorial/snowflake/sql.md
index ba8f9c11ff..271c1e174d 100644
--- a/docs/tutorial/snowflake/sql.md
+++ b/docs/tutorial/snowflake/sql.md
@@ -424,11 +424,11 @@ SRID=0;POINT(1 2)
1. Sedona Snowflake doesn't support the `M` dimension due to a limitation of WKB serialization. Sedona Spark and Sedona Flink support XYZM because they use our in-house serialization format. Although Sedona Snowflake has functions related to the `M` dimension, all `M` values will be ignored.
2. Sedona H3 functions are not supported because Snowflake does not allow embedded C code in UDFs.
3. All User Defined Table Functions work only with geometries created by Sedona constructors, due to Snowflake's current limitation `Data type GEOMETRY is not supported in non-SQL UDTF return type` (see the sketch after this list). This includes:
- * ST_MinimumBoundingRadius
- * ST_Intersection_Aggr
- * ST_SubDivideExplode
- * ST_Envelope_Aggr
- * ST_Union_Aggr
- * ST_Collect
- * ST_Dump
+ * ST_MinimumBoundingRadius
+ * ST_Intersection_Aggr
+ * ST_SubDivideExplode
+ * ST_Envelope_Aggr
+ * ST_Union_Aggr
+ * ST_Collect
+ * ST_Dump
4. Only Sedona ST functions are available in Snowflake. Raster functions (RS functions) are not available in Snowflake yet.
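+
+For illustration, a sketch of the constructor requirement described in item 3 above, assuming the functions are installed in a schema named `SEDONA`:
+
+```sql
+-- Works: the input geometry comes from a Sedona constructor.
+SELECT * FROM TABLE(SEDONA.ST_SUBDIVIDEEXPLODE(
+    SEDONA.ST_GEOMFROMTEXT('LINESTRING (0 0, 85 85)'), 5));
+
+-- Not supported: a geometry produced by Snowflake's native TO_GEOMETRY
+-- cannot be passed to these table functions.
+-- SELECT * FROM TABLE(SEDONA.ST_SUBDIVIDEEXPLODE(TO_GEOMETRY('LINESTRING (0 0, 85 85)'), 5));
+```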
diff --git a/mkdocs.yml b/mkdocs.yml
index 4defb65cf2..1f3b533cc8 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -1,5 +1,5 @@
site_name: Apache Sedona™
-site_description: Apache Sedona™ is a cluster computing system for processing large-scale spatial data. Sedona extends existing cluster computing systems, such as Apache Spark and Apache Flink, with a set of out-of-the-box distributed Spatial Datasets and Spatial SQL that efficiently load, process, and analyze large-scale spatial data across machines.
+site_description: Apache Sedona™ is a cluster computing system for processing large-scale spatial data. Sedona extends existing cluster computing systems, such as Apache Spark, Apache Flink, and Snowflake, with a set of out-of-the-box distributed Spatial Datasets and Spatial SQL that efficiently load, process, and analyze large-scale spatial data across machines.
nav:
- Home: index.md
- Setup: