Clean Frames


Clean Frames is a library for the Apache Spark SQL module. It provides a type class for data cleansing.

Getting Clean Frames

The current stable version is 0.3.0, which is cross-built against Scala (2.11-2.12) and Apache Spark (2.1.0-2.4.3).

If you're using SBT, add the following line to your build file:

libraryDependencies += "io.funkyminds" %% "cleanframes" % "2.4.3_0.3.0"

or add the following Maven dependency:

<dependency>
  <groupId>io.funkyminds</groupId>
  <artifactId>cleanframes_2.12</artifactId>
  <version>2.4.3_0.3.0</version>
</dependency>

Quick Start

Assuming a DataFrame is loaded from a CSV file with the following content:

1,true,1.0
lmfao,true,2.0
3,false,3.0
4,true,yolo data
5,true,5.0

and a domain model is defined as:

case class Example(col1: Option[Int], col2: Option[Boolean], col3: Option[Float])

the library cleans the data to:

Example(Some(1),  Some(true),   Some(1.0f)),
Example(None,     Some(true),   Some(2.0f)),
Example(Some(3),  Some(false),  Some(3.0f)),
Example(Some(4),  Some(true),   None),
Example(Some(5),  Some(true),   Some(5.0f))

with minimal code:

import cleanframes.instances.all._
import cleanframes.syntax._
  
frame
  .clean[Example]
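
For completeness, a fuller end-to-end sketch; the session setup, file path, and column naming here are illustrative assumptions rather than part of the library:

import org.apache.spark.sql.SparkSession
import cleanframes.instances.all._
import cleanframes.syntax._

// illustrative local session; adjust master and app name to your environment
val spark = SparkSession.builder()
  .master("local[*]")
  .appName("cleanframes-quickstart")
  .getOrCreate()

import spark.implicits._

// read the raw headerless CSV; columns arrive as strings
// and are renamed here to match the Example model
val frame = spark.read.csv("path/to/example.csv")
  .toDF("col1", "col2", "col3")

// clean the frame against the model, then view it as a typed Dataset
// (assumes clean returns a DataFrame convertible with as[Example])
val result = frame.clean[Example].as[Example].collect()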

What is so different?

We would like to live in a world where data quality is superb but only unicorns are perfect.

By default, Apache Spark discards an entire row if it contains any invalid value.

Calling Spark on the same data:

frame
  .as[Example]

would give a dataset with the following content:

Example(Some(1),  Some(true),   Some(1.0f)),
Example(None,     None,         None),
Example(Some(3),  Some(false),  Some(3.0f)),
Example(None,     None,         None),
Example(Some(5),  Some(true),   Some(5.0f))

As you can see, the data in the second and fourth rows is lost due to single malformed cells. Such behaviour might not be acceptable in some domains.

Pure Spark-SQL API

To keep valid cells and discard only the invalid ones, the Spark SQL API might be used as follows:

import org.apache.spark.sql.functions.{lit, lower, not, trim, when}
import org.apache.spark.sql.types.{FloatType, IntegerType}

val cleaned = frame.withColumn(
  "col1",
  // keep col1 when it is not NaN (invalid values become null), then cast to integer
  when(
    not(
      frame.col("col1").isNaN
    ),
    frame.col("col1")
  ) cast IntegerType
).withColumn(
  "col2",
  // treat the trimmed, lowercased value "true" as true and everything else as false
  when(
    trim(lower(frame.col("col2"))) === "true",
    lit(true)
  ).otherwise(false)
).withColumn(
  "col3",
  // keep col3 when it is not NaN (invalid values become null), then cast to float
  when(
    not(
      frame.col("col3").isNaN
    ),
    frame.col("col3")
  ) cast FloatType
)
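
The cleaned frame can then be viewed through the domain model in the usual way; a brief usage sketch (assuming spark.implicits._ is in scope for the encoder):

val result = cleaned.as[Example].collect()
// all five rows survive; None appears only in the cells that were malformed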

cleanframes

cleanframes is a small library that generates such boilerplate as above for you when you call:

frame
  .clean[CaseClass]

It resolves type-related transformations at compile time using implicit resolution, in a type-safe way.

The library ships with common basic transformations and can be extended with custom ones (see the sketch below).

There is no runtime performance penalty; all the code is generated at compile time (currently via shapeless).
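
To illustrate the extension point, a custom transformation can be supplied as an implicit instance of the cleansing type class. The trait below is only a sketch of that pattern; the real type class in cleanframes may differ in name and signature, so treat every identifier here as an assumption and consult the project sources:

import org.apache.spark.sql.{Column, DataFrame}
import org.apache.spark.sql.functions.{lit, lower, trim, when}

// ASSUMPTION: a simplified stand-in for the library's cleansing type class
trait Cleaner[A] {
  def clean(frame: DataFrame, name: String): Column
}

// hypothetical custom instance mapping "yes"/"no" strings to booleans;
// any other value falls through to null and surfaces as None in the model
implicit val yesNoBoolean: Cleaner[Option[Boolean]] =
  new Cleaner[Option[Boolean]] {
    def clean(frame: DataFrame, name: String): Column =
      when(trim(lower(frame.col(name))) === "yes", lit(true))
        .when(trim(lower(frame.col(name))) === "no", lit(false))
  }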

Instructions

For further instructions, refer to the project documentation.

FAQ

  • Why is the minimal Spark version 2.1.0 when Datasets were introduced in 1.6.0?

    There is a problem with value class support in the 2.0.x versions: Spark throws a runtime exception during its code generation. Spark 1.6.x has a problem with the testing library.
