MLeap: Deploy Spark Pipelines to Production



Deploying machine learning data pipelines and algorithms should not be a time-consuming or difficult task. MLeap allows data scientists and engineers to deploy machine learning pipelines from Spark and Scikit-learn to a portable format and execution engine.

Using the MLeap execution engine and serialization format, we provide a performant, portable and easy-to-integrate production library for machine learning data pipelines and algorithms.

For portability, we build our software on the JVM and only use serialization formats that are widely-adopted.

We also integrate closely with existing technologies such as Spark and Scikit-learn.

Our goal:

  1. Build data pipelines and train algorithms with Spark and Scikit-learn
  2. Serialize your pipeline and algorithm to Bundle.ML
  3. Use MLeap to execute your pipeline and algorithm without the Spark/Scikit-learn dependencies

Basic examples are located below, but you can read Serializing a Spark ML Pipeline and Scoring with MLeap to gain a full sense of what is possible.


  1. Core execution engine implemented in Scala
  2. Spark, PySpark and Scikit-Learn support
  3. Export a model with Scikit-learn or Spark and execute it on the MLeap engine anywhere in the JVM
  4. Choose from 3 portable serialization formats (JSON, Protobuf, and Mixed)
  5. Implement your own custom data types and transformers for use with MLeap data frames and transformer pipelines
  6. Extensive test coverage with full parity tests for Spark and MLeap pipelines
  7. An optional Spark transformer extension to extend Spark's default transformer offerings


Documentation is available at


Link with Maven or SBT

MLeap is cross-compiled for Scala 2.10 and 2.11. If you manage dependencies with a Maven POM, replace 2.10 with 2.11 in the artifact id wherever it appears when building against Scala 2.11. If you use SBT, the %% operator appends the correct Scala version suffix for you.


libraryDependencies += "ml.combust.mleap" %% "mleap-runtime" % "0.5.0"
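If you manage dependencies with a Maven POM instead, the same dependency would look roughly like this (a sketch based on the sbt coordinates above; note that the artifact id carries the Scala version suffix explicitly):

```xml
<dependency>
  <groupId>ml.combust.mleap</groupId>
  <!-- use mleap-runtime_2.11 when building against Scala 2.11 -->
  <artifactId>mleap-runtime_2.10</artifactId>
  <version>0.5.0</version>
</dependency>
```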



For Spark Integration


libraryDependencies += "ml.combust.mleap" %% "mleap-spark" % "0.5.0"



Spark Packages

$ bin/spark-shell --packages ml.combust.mleap:mleap-spark_2.11:0.5.0

Using the Library

For more complete examples, see our other Git repository: MLeap Demos

Create and Export a Spark Pipeline

The first step is to create our pipeline in Spark. For our example we will manually build a simple Spark ML pipeline.

import org.apache.spark.ml.feature.{Binarizer, StringIndexerModel}
import org.apache.spark.ml.mleap.SparkUtil

// Use out-of-the-box Spark transformers like you normally would
val stringIndexer = new StringIndexerModel(uid = "si", labels = Array("hello", "MLeap")).
  setInputCol("test_string").
  setOutputCol("test_index")

val binarizer = new Binarizer(uid = "bin").
  setThreshold(0.5).
  setInputCol("test_double").
  setOutputCol("test_bin")

// Use the MLeap utility method to directly create a PipelineModel,
// without needing to fit an Estimator pipeline first
val pipeline = SparkUtil.createPipelineModel(uid = "pipeline", Array(stringIndexer, binarizer))

import ml.combust.bundle.BundleFile
import ml.combust.mleap.spark.SparkSupport._
import resource._

// save our pipeline to a zip file;
// we can save to any supported java.nio.FileSystem
for(modelFile <- managed(BundleFile("jar:file:/tmp/simple-spark-pipeline.zip"))) {
  pipeline.writeBundle.save(modelFile).get
}

Spark pipelines are not meant to be run outside of Spark. They require a DataFrame and therefore a SparkContext to run. These are expensive data structures and libraries to include in a project. With MLeap, there is no dependency on Spark to execute a pipeline. MLeap dependencies are lightweight and we use fast data structures to execute your ML pipelines.

Create and Export a Scikit-Learn Pipeline

# Load scikit-learn mleap extensions
import pandas as pd
import mleap.sklearn.pipeline
from mleap.sklearn.preprocessing.data import NDArrayToDataFrame

# Load scikit-learn transformers and models
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import LabelEncoder, Binarizer

# Define the label encoder (the mlinit method adds a unique `name` to the
# transformer as well as explicit input/output features)
label_encoder_tf = LabelEncoder()
label_encoder_tf.mlinit(input_features='col_a', output_features='col_a_label_le')

# Convert the output of the label encoder to a DataFrame instead of a 1d-array
n_dim_array_to_df_tf = NDArrayToDataFrame(label_encoder_tf.output_features)

# Define our binarizer
binarizer = Binarizer(0.5)
binarizer.mlinit(input_features=n_dim_array_to_df_tf.output_features,
                 output_features="{}_binarized".format(n_dim_array_to_df_tf.output_features))

data = pd.DataFrame(['a', 'b', 'c'], columns=['col_a'])

# Assemble the steps of our pipeline
steps = [
    (label_encoder_tf.name, label_encoder_tf),
    (n_dim_array_to_df_tf.name, n_dim_array_to_df_tf),
    (binarizer.name, binarizer)
]

pipeline = Pipeline(steps)
pipeline.mlinit()

# Fit the pipeline
pipeline.fit(data)

# Write the pipeline to Bundle.ML
pipeline.serialize_to_bundle('/tmp', 'simple-sk-pipeline', init=True)
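Setting MLeap serialization aside for a moment, the plain scikit-learn transformers in the pipeline above behave like this on the sample data (a minimal, MLeap-free sketch; the `NDArrayToDataFrame` step is omitted since it only reshapes the intermediate output):

```python
from sklearn.preprocessing import LabelEncoder, Binarizer

data = ['a', 'b', 'c']

# LabelEncoder maps each distinct string to an integer code
label_encoder = LabelEncoder()
codes = label_encoder.fit_transform(data)
print(codes.tolist())  # [0, 1, 2]

# Binarizer then thresholds those codes at 0.5
binarizer = Binarizer(threshold=0.5)
binarized = binarizer.fit_transform(codes.reshape(-1, 1))
print(binarized.ravel().tolist())  # [0, 1, 1]
```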

Load and Transform Using MLeap

Because we export Spark and Scikit-learn pipelines to a standard format, we can use either our Spark-trained pipeline or our Scikit-learn pipeline from the previous steps to demonstrate usage of MLeap in this section. The choice is yours!

import ml.combust.bundle.BundleFile
import ml.combust.mleap.runtime.MleapSupport._
import resource._

// load the Spark pipeline we saved in the previous section
val bundle = (for(bundleFile <- managed(BundleFile("jar:file:/tmp/simple-spark-pipeline.zip"))) yield {
  bundleFile.loadMleapBundle().get
}).opt.get

// create a simple LeapFrame to transform
import ml.combust.mleap.runtime.{Row, LeapFrame, LocalDataset}
import ml.combust.mleap.runtime.types._

// MLeap makes extensive use of monadic types like Try
val schema = StructType(StructField("test_string", StringType),
  StructField("test_double", DoubleType)).get
val data = LocalDataset(Row("hello", 0.6),
  Row("MLeap", 0.2))
val frame = LeapFrame(schema, data)

// transform the dataframe using our pipeline
val mleapPipeline = bundle.root
val frame2 = mleapPipeline.transform(frame).get
val data2 = frame2.dataset

// get data from the transformed rows and make some assertions
assert(data2(0).getDouble(2) == 0.0) // string indexer output
assert(data2(0).getDouble(3) == 1.0) // binarizer output

// the second row
assert(data2(1).getDouble(2) == 1.0)
assert(data2(1).getDouble(3) == 0.0)


For more documentation, please see our wiki, where you can:

  1. implement custom transformers that will work with Spark, MLeap and Scikit-learn
  2. implement custom data types to transform with Spark and MLeap pipelines
  3. transform with blazing-fast speed using optimized row-based transformers
  4. serialize MLeap data frames to various formats like Avro, JSON, and a custom binary format
  5. implement new serialization formats for MLeap data frames
  6. work through several demonstration pipelines which use real-world data to create predictive pipelines
  7. browse the list of supported Spark transformers
  8. browse the list of supported Scikit-learn transformers
  9. browse the custom transformers provided by MLeap


  • Write documentation.
  • Write a tutorial/walkthrough for an interesting ML problem
  • Contribute an Estimator/Transformer from Spark
  • Use MLeap at your company and tell us what you think
  • Make a feature request or report a bug in github
  • Make a pull request for an existing feature request or bug report
  • Join the discussion of how to get MLeap into Spark as a dependency. Talk with us on Gitter.

Contact Information


See LICENSE and NOTICE file in this repository.

Copyright 2016 Combust, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.