Spark Utils

Motivation

One of the biggest challenges after taking the first steps into the world of writing Apache Spark applications in Scala is taking them to production.

An application of any kind needs to be easy to run and easy to configure.

This project is trying to help developers write Spark applications focusing mainly on the application logic rather than the details of configuring the application and setting up the Spark context.

Description

This project contains some basic utilities that can help setting up a Spark application project.

The main point is the simplicity of writing Apache Spark applications just focusing on the logic, while providing for easy configuration and arguments passing.

The code sample below shows how easy it can be to write a file format converter from any supported input format, with any supported parsing configuration options, to any supported output format.

object FormatConverterExample extends SparkApp[FormatConverterContext, DataFrame] {
  // Build the application context (input and output configurations) from the given Config
  override def createContext(config: Config) = FormatConverterContext(config).get
  override def run(implicit spark: SparkSession, context: FormatConverterContext): DataFrame = {
    // source() and sink() are decorators brought in by org.tupol.spark.implicits._
    val inputData = spark.source(context.input).read
    inputData.sink(context.output).write
  }
}

Creating the configuration can be as simple as defining a case class to hold it and a factory that extracts both simple and complex data types, such as input sources and output sinks.

case class FormatConverterContext(input: FormatAwareDataSourceConfiguration,
                                  output: FormatAwareDataSinkConfiguration)

object FormatConverterContext extends Configurator[FormatConverterContext] {
  def validationNel(config: Config): ValidationNel[Throwable, FormatConverterContext] = {
    config.extract[FormatAwareDataSourceConfiguration]("input") |@|
      config.extract[FormatAwareDataSinkConfiguration]("output") apply
      FormatConverterContext.apply
  }
}

Optionally, SparkFun can be used instead of SparkApp to make the code even more concise.

object FormatConverterExample extends 
          SparkFun[FormatConverterContext, DataFrame](FormatConverterContext(_).get) {
  override def run(implicit spark: SparkSession, context: FormatConverterContext): DataFrame = 
    spark.source(context.input).read.sink(context.output).write
}

For structured streaming applications, the format converter might look like this:

object StreamingFormatConverterExample extends SparkApp[StreamingFormatConverterContext, DataFrame] {
  override def createContext(config: Config) = StreamingFormatConverterContext(config).get
  override def run(implicit spark: SparkSession, context: StreamingFormatConverterContext): DataFrame = {
    val inputData = spark.source(context.input).read
    inputData.streamingSink(context.output).write.awaitTermination()
  }
}

The streaming configuration can be as simple as the following:

case class StreamingFormatConverterContext(input: FormatAwareStreamingSourceConfiguration, 
                                           output: FormatAwareStreamingSinkConfiguration)

object StreamingFormatConverterContext extends Configurator[StreamingFormatConverterContext] {
  def validationNel(config: Config): ValidationNel[Throwable, StreamingFormatConverterContext] = {
    config.extract[FormatAwareStreamingSourceConfiguration]("input") |@|
      config.extract[FormatAwareStreamingSinkConfiguration]("output") apply
      StreamingFormatConverterContext.apply
  }
}

SparkRunnable and SparkApp, together with the configuration framework, make it easy to create Spark applications whose configuration can be managed through configuration files or application parameters.
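
As a hedged sketch, an external configuration file for the FormatConverterExample above might look like the following HOCON; the input and output paths match the keys extracted in the Configurator, while the format, path and options entries below are hypothetical and depend on the actual source and sink types configured:

input {
  format = "csv"
  path = "data/input/*.csv"
  # hypothetical parser options, passed through to Spark's DataFrameReader
  options {
    header = "true"
    delimiter = ","
  }
}
output {
  format = "parquet"
  path = "data/output/"
}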

The IO frameworks for reading and writing data frames add extra convenience for setting up batch and structured streaming jobs that transform various types of files and streams.

Last but not least, there are many utility functions that provide convenience for loading resources, dealing with schemas and so on.
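
For instance, one chore these helpers address, materializing a StructType from a JSON-serialized schema stored as a resource, can be sketched with plain Spark API alone; the /schema.json resource name below is a hypothetical placeholder:

import org.apache.spark.sql.types.{ DataType, StructType }
import scala.io.Source

// Load a JSON-serialized Spark schema from the classpath (resource name is hypothetical)
val schemaJson = Source.fromInputStream(getClass.getResourceAsStream("/schema.json")).mkString
val schema = DataType.fromJson(schemaJson).asInstanceOf[StructType]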

Most of the common features are also implemented as decorators for the main Spark classes, like SparkContext, DataFrame and StructType, and they are conveniently available by importing the org.tupol.spark.implicits._ package.
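
As a minimal sketch of the decorators in action outside a SparkApp (sourceConf and sinkConf below are hypothetical placeholders for configuration instances built as in the Configurator examples above):

import org.apache.spark.sql.SparkSession
import org.tupol.spark.implicits._

// Hypothetical stand-alone usage; sourceConf and sinkConf stand in for
// FormatAwareDataSourceConfiguration / FormatAwareDataSinkConfiguration instances
val spark = SparkSession.builder.appName("decorators-demo").getOrCreate()
val data = spark.source(sourceConf).read // SparkSession decorated with source()
data.sink(sinkConf).write                // DataFrame decorated with sink()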

Documentation

The documentation for the main utilities and frameworks is available:

Latest stable API documentation is available here.

An extensive tutorial and walk-through can be found here. Samples and demos can be found here.

A nice example of how this library can be used can be found in the spark-tools project, through the implementation of a generic format converter and a SQL processor for both batch and structured streams.

Prerequisites

  • Java 6 or higher
  • Scala 2.11 or 2.12
  • Apache Spark 2.3.X

Getting Spark Utils

Spark Utils is published to Sonatype OSS and Maven Central:

  • Group id / organization: org.tupol
  • Artifact id / name: spark-utils
  • Latest version is 0.4.0

To use it with SBT, add a dependency on the latest version of spark-utils to your sbt build definition file:

libraryDependencies += "org.tupol" %% "spark-utils" % "0.4.0"

Starting a New spark-utils Project

The simplest way to start a new spark-utils project is to make use of the spark-apps.seed.g8 template project.

To fill in the project options manually, run:

g8 tupol/spark-apps.seed.g8

The default options look like the following:

name [My Project]:
appname [My First App]:
organization [my.org]:
version [0.0.1-SNAPSHOT]:
package [my.org.my_project]:
classname [MyFirstApp]:
scriptname [my-first-app]:
scalaVersion [2.11.12]:
sparkVersion [2.4.0]:
sparkUtilsVersion [0.4.0]:

To fill in the options in advance, run:

g8 tupol/spark-apps.seed.g8 --name="My Project" --appname="My App" --organization="my.org" --force

What's new?

0.4.1-SNAPSHOT

  • Added SparkFun, a convenience wrapper around SparkApp that makes the code even more concise

0.4.0

  • Added the StreamingConfiguration marker trait
  • Added GenericStreamDataSource, FileStreamDataSource and KafkaStreamDataSource
  • Added GenericStreamDataSink, FileStreamDataSink and KafkaStreamDataSink
  • Added FormatAwareStreamingSourceConfiguration and FormatAwareStreamingSinkConfiguration
  • Extracted TypesafeConfigBuilder
  • API Changes: Added a new type parameter to the DataSink that describes the type of the output
  • Improved unit test coverage

For previous versions please consult the release notes.

License

This code is open source software licensed under the MIT License.
