From 0b81fd5e75dea77834cbfee10da543db8fa1de48 Mon Sep 17 00:00:00 2001
From: Dongjoon Hyun
Date: Fri, 13 Sep 2024 15:54:44 -0700
Subject: [PATCH 1/2] [SPARK-49649][DOCS] Make `docs/index.md` up-to-date for 4.0.0

---
 docs/index.md | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/docs/index.md b/docs/index.md
index 7e57eddb6da86..4b8a6b9ba6c35 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -34,9 +34,8 @@ source, visit [Building Spark](building-spark.html).
 Spark runs on both Windows and UNIX-like systems (e.g. Linux, Mac OS), and it should run on any platform that runs a supported version of Java.
 This should include JVMs on x86_64 and ARM64. It's easy to run locally on one machine --- all you need is to have `java` installed on your system `PATH`,
 or the `JAVA_HOME` environment variable pointing to a Java installation.
-Spark runs on Java 17/21, Scala 2.13, Python 3.8+, and R 3.5+.
-When using the Scala API, it is necessary for applications to use the same version of Scala that Spark was compiled for.
-For example, when using Scala 2.13, use Spark compiled for 2.13, and compile code/applications for Scala 2.13 as well.
+Spark runs on Java 17/21, Scala 2.13, Python 3.9+, and R 3.5+.
+When using the Scala API, it is necessary for applications to use the same version of Scala that Spark was compiled for. Since Spark 4.0.0, it's Scala 2.13.
 
 # Running the Examples and Shell
 
@@ -110,7 +109,7 @@ options for deployment:
 * [Spark Streaming](streaming-programming-guide.html): processing data streams using DStreams (old API)
 * [MLlib](ml-guide.html): applying machine learning algorithms
 * [GraphX](graphx-programming-guide.html): processing graphs
-* [SparkR](sparkr.html): processing data with Spark in R
+* [SparkR (Deprecated)](sparkr.html): processing data with Spark in R
 * [PySpark](api/python/getting_started/index.html): processing data with Spark in Python
 * [Spark SQL CLI](sql-distributed-sql-engine-spark-sql-cli.html): processing data with SQL on the command line
 
@@ -128,10 +127,13 @@ options for deployment:
 * [Cluster Overview](cluster-overview.html): overview of concepts and components when running on a cluster
 * [Submitting Applications](submitting-applications.html): packaging and deploying applications
 * Deployment modes:
-  * [Amazon EC2](https://github.com/amplab/spark-ec2): scripts that let you launch a cluster on EC2 in about 5 minutes
   * [Standalone Deploy Mode](spark-standalone.html): launch a standalone cluster quickly without a third-party cluster manager
   * [YARN](running-on-yarn.html): deploy Spark on top of Hadoop NextGen (YARN)
-  * [Kubernetes](running-on-kubernetes.html): deploy Spark on top of Kubernetes
+  * [Kubernetes](running-on-kubernetes.html): deploy Spark apps on top of Kubernetes directly
+  * [Amazon EC2](https://github.com/amplab/spark-ec2): scripts that let you launch a cluster on EC2 in about 5 minutes
+* [Spark Kubernetes Operator](https://github.com/apache/spark-kubernetes-operator):
+  * [SparkApp](https://github.com/apache/spark-kubernetes-operator/blob/main/examples/pyspark-pi.yaml): deploy Spark apps on top of Kubernetes via [operator patterns](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/)
+  * [SparkCluster](https://github.com/apache/spark-kubernetes-operator/blob/main/examples/cluster-with-template.yaml): deploy Spark clusters on top of Kubernetes via [operator patterns](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/)
 
 **Other Documents:**
 

From 920f39e6b59e2adc4b9044ce3fc6fafc9a056657 Mon Sep 17 00:00:00 2001
From: Dongjoon Hyun
Date: Fri, 13 Sep 2024 20:47:44 -0700
Subject: [PATCH 2/2] Update docs/index.md

Co-authored-by: Kent Yao
---
 docs/index.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/index.md b/docs/index.md
index 4b8a6b9ba6c35..fea62865e2160 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -34,7 +34,7 @@ source, visit [Building Spark](building-spark.html).
 Spark runs on both Windows and UNIX-like systems (e.g. Linux, Mac OS), and it should run on any platform that runs a supported version of Java.
 This should include JVMs on x86_64 and ARM64. It's easy to run locally on one machine --- all you need is to have `java` installed on your system `PATH`,
 or the `JAVA_HOME` environment variable pointing to a Java installation.
-Spark runs on Java 17/21, Scala 2.13, Python 3.9+, and R 3.5+.
+Spark runs on Java 17/21, Scala 2.13, Python 3.9+, and R 3.5+ (Deprecated).
 When using the Scala API, it is necessary for applications to use the same version of Scala that Spark was compiled for. Since Spark 4.0.0, it's Scala 2.13.
 
 # Running the Examples and Shell
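
For context on the requirement the first hunk updates (Python 3.9+ for Spark 4.0.0), here is a minimal sketch of a runtime guard a PySpark application could use. It is an illustration, not part of the patch, and it assumes a local `pyspark` 4.0.0 installation with a Java 17 or 21 runtime available:

```python
import sys

# Guard reflecting the documented minimum: Spark 4.0.0 supports Python 3.9+.
if sys.version_info < (3, 9):
    raise RuntimeError(
        f"Spark 4.0.0 requires Python 3.9+; running {sys.version.split()[0]}"
    )

from pyspark.sql import SparkSession  # assumes a `pyspark` 4.0.0 install

# Start a local session and confirm the Spark version being documented.
spark = SparkSession.builder.appName("version-check").getOrCreate()
print("Spark version:", spark.version)  # expected to print a 4.0.x string
spark.stop()
```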