The code in this repository demonstrates best practice when working with Kedro and PySpark on Databricks. It contains a Kedro starter template with some initial configuration and an example pipeline, and it accompanies the documentation on developing and deploying Kedro projects on Databricks.
The starter contains an example project with code based on the familiar Iris dataset.
The starter template can be used to start a new project using the `--starter` option in `kedro new`:

```bash
kedro new --starter=databricks-iris
```
This starter has a base configuration that allows it to run natively on Databricks. Directories to store data and logs still need to be manually created in the user's Databricks DBFS instance:
- `/dbfs/FileStore/iris_databricks/data`
- `/dbfs/FileStore/iris_databricks/logs`
See the documentation on deploying a packaged Kedro project to Databricks for more information.
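For example, assuming you are working from a Databricks notebook, these directories could be created with `dbutils` as in the following sketch (the paths are the ones listed above):

```python
# Create the data and logs directories on DBFS.
# dbutils is available automatically inside a Databricks notebook;
# mkdirs succeeds even if the directory already exists.
dbutils.fs.mkdirs("dbfs:/FileStore/iris_databricks/data")
dbutils.fs.mkdirs("dbfs:/FileStore/iris_databricks/logs")
```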
While Spark allows you to specify many different configuration options, this starter uses `/conf/base/spark.yml` as a single configuration location.

This Kedro starter contains the initialisation code for the `SparkSession` in the `ProjectContext` and takes its configuration from `/conf/base/spark.yml`. Modify this code if you want to further customise your `SparkSession`, e.g. to use YARN.
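As an illustration only, not the starter's exact code, building a `SparkSession` from `spark.yml` typically looks something like the sketch below; the config loader call pattern and the application name are assumptions and may differ between Kedro versions:

```python
from pyspark import SparkConf
from pyspark.sql import SparkSession

# Load key-value Spark options from conf/base/spark.yml via Kedro's config
# loader (`context` is assumed to be the project's ProjectContext instance).
parameters = context.config_loader.get("spark*", "spark*/**")
spark_conf = SparkConf().setAll(parameters.items())

# Build (or retrieve) a session with those options applied.
spark = (
    SparkSession.builder.appName("iris_databricks")  # illustrative app name
    .config(conf=spark_conf)
    .getOrCreate()
)
```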
Out of the box, Kedro's `MemoryDataset` works with Spark's `DataFrame`. However, it doesn't work with other Spark objects, such as machine learning models, unless you add further configuration. This Kedro starter demonstrates how to configure `MemoryDataset` for a Spark machine learning model in `catalog.yml`.
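By way of illustration, the same behaviour can be expressed through Kedro's Python API, as in the sketch below; the dataset name `example_classifier` is an assumption, and the corresponding `catalog.yml` entry would set `type: MemoryDataset` with `copy_mode: assign`:

```python
from kedro.io import DataCatalog, MemoryDataset

# copy_mode="assign" stores the object by reference instead of deep-copying it,
# which is what Spark objects such as fitted ML models need when they are
# passed between nodes in memory.
catalog = DataCatalog(
    {
        "example_classifier": MemoryDataset(copy_mode="assign"),  # assumed name
    }
)
```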
Note: The use of `MemoryDataset` is encouraged to propagate Spark's `DataFrame` between nodes in the pipeline. A best practice is to delay triggering Spark actions for as long as possible to take advantage of Spark's lazy evaluation.
This Kedro starter uses the simple and familiar Iris dataset. It contains the code for an example machine learning pipeline that runs a 1-nearest-neighbour classifier to classify an iris. Transcoding is used to convert the Spark DataFrames into pandas DataFrames after splitting the data into training and testing sets.
The pipeline includes the following nodes, sketched in code after the list:
- A node to split the data into training and testing datasets using a configurable ratio
- A node to run a simple 1-nearest-neighbour classifier and make predictions
- A node to report the accuracy of the predictions made by the model
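A minimal sketch of how these three nodes might be wired together in Kedro is shown below; the node function names and dataset names (`split_data`, `make_predictions`, `report_accuracy`, `example_iris_data`, and so on) are illustrative assumptions rather than the starter's exact identifiers:

```python
from kedro.pipeline import Pipeline, node

# The functions referenced here are assumed to be defined in the project's
# nodes module; they are not reproduced in this sketch.
from .nodes import make_predictions, report_accuracy, split_data


def create_pipeline(**kwargs) -> Pipeline:
    return Pipeline(
        [
            # Split the raw data into training and testing sets using a ratio
            # taken from the project's parameters.
            node(
                split_data,
                inputs=["example_iris_data", "parameters"],
                outputs=["X_train", "X_test", "y_train", "y_test"],
                name="split",
            ),
            # Run the 1-nearest-neighbour classifier and produce predictions.
            node(
                make_predictions,
                inputs=["X_train", "X_test", "y_train"],
                outputs="y_pred",
                name="make_predictions",
            ),
            # Compare predictions with the held-out labels and report accuracy.
            node(
                report_accuracy,
                inputs=["y_pred", "y_test"],
                outputs=None,
                name="report_accuracy",
            ),
        ]
    )
```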