
Bump version to v0.14

mhamilton723 committed Sep 23, 2018
1 parent 25f3601 commit 7eed833156f507357c8b43aefcf77ddfab0f2be9
Showing with 32 additions and 27 deletions.
  1. +23 −18 README.md
  2. +3 −3 docs/R-setup.md
  3. +4 −4 docs/docker.md
  4. +2 −2 docs/gpu-setup.md
README.md
@@ -5,15 +5,20 @@ Microsoft Machine Learning for Apache Spark
<img title="Build Status" align="right"
src="https://mmlspark.azureedge.net/icons/BuildStatus.svg" />
-MMLSpark provides a number of deep learning and data science tools for [Apache
-Spark](https://github.com/apache/spark), including seamless integration of
-Spark Machine Learning pipelines with [Microsoft Cognitive Toolkit
-(CNTK)](https://github.com/Microsoft/CNTK) and
-[OpenCV](http://www.opencv.org/), enabling you to quickly create powerful,
-highly-scalable predictive and analytical models for large image and text
-datasets.
-MMLSpark requires Scala 2.11, Spark 2.1+, and either Python 2.7 or Python 3.5+.
+MMLSpark is an ecosystem of tools aimed at expanding the distributed computing framework
+[Apache Spark](https://github.com/apache/spark) in several new directions.
+MMLSpark adds a number of deep learning and data science tools to the Spark ecosystem,
+including seamless integration of Spark Machine Learning pipelines with [Microsoft Cognitive Toolkit
+(CNTK)](https://github.com/Microsoft/CNTK), [LightGBM](https://github.com/Microsoft/LightGBM) and
+[OpenCV](http://www.opencv.org/). This enables powerful and highly-scalable predictive and analytical models
+for a variety of data sources.
+MMLSpark also brings new networking capabilities to the Spark ecosystem. With the HTTP on Spark project, users
+can embed **any** web service into their SparkML models. In this vein, MMLSpark provides easy-to-use
+SparkML transformers for a wide variety of [Microsoft Cognitive Services](https://azure.microsoft.com/en-us/services/cognitive-services/). For production-grade deployment, the Spark Serving project enables high-throughput,
+sub-millisecond latency web services, backed by your Spark cluster.
+MMLSpark requires Scala 2.11, Spark 2.3+, and either Python 2.7 or Python 3.5+.
See the API documentation [for
Scala](http://mmlspark.azureedge.net/docs/scala/) and [for
PySpark](http://mmlspark.azureedge.net/docs/pyspark/).
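The new introduction above describes embedding CNTK, LightGBM, and OpenCV directly in SparkML pipelines. As a rough illustration (not part of this commit), the sketch below builds a small pipeline around MMLSpark's LightGBM integration; the `mmlspark` import path and the `LightGBMClassifier`, `labelCol`, and `featuresCol` names follow SparkML conventions and are assumptions that may differ from the actual 0.14 API.

```python
# Rough sketch only: exercises the SparkML pipeline integration described
# above. The mmlspark import path and LightGBMClassifier parameters are
# assumptions and may differ in the real 0.14 release.
import pyspark
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler

spark = (pyspark.sql.SparkSession.builder
         .appName("MMLSparkIntro")
         .config("spark.jars.packages", "Azure:mmlspark:0.14")
         .getOrCreate())

from mmlspark import LightGBMClassifier  # assumed import for v0.14

# Tiny in-memory dataset standing in for a real Spark DataFrame.
df = spark.createDataFrame(
    [(1.0, 2.0, 0), (2.0, 1.0, 1), (3.0, 4.0, 0), (4.0, 3.0, 1)],
    ["x1", "x2", "label"])

pipeline = Pipeline(stages=[
    VectorAssembler(inputCols=["x1", "x2"], outputCol="features"),
    LightGBMClassifier(labelCol="label", featuresCol="features"),
])
pipeline.fit(df).transform(df).select("label", "prediction").show()
```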
@@ -151,9 +156,9 @@ MMLSpark can be conveniently installed on existing Spark clusters via the
`--packages` option, examples:
```bash
-spark-shell --packages Azure:mmlspark:0.13
-pyspark --packages Azure:mmlspark:0.13
-spark-submit --packages Azure:mmlspark:0.13 MyApp.jar
+spark-shell --packages Azure:mmlspark:0.14
+pyspark --packages Azure:mmlspark:0.14
+spark-submit --packages Azure:mmlspark:0.14 MyApp.jar
```
This can be used in other Spark contexts too, for example, you can use MMLSpark
@@ -168,14 +173,14 @@ cloud](http://community.cloud.databricks.com), create a new [library from Maven
coordinates](https://docs.databricks.com/user-guide/libraries.html#libraries-from-maven-pypi-or-spark-packages)
in your workspace.
-For the coordinates use: `Azure:mmlspark:0.13`. Ensure this library is
+For the coordinates use: `Azure:mmlspark:0.14`. Ensure this library is
attached to all clusters you create.
Finally, ensure that your Spark cluster has at least Spark 2.1 and Scala 2.11.
You can use MMLSpark in both your Scala and PySpark notebooks. To get started with our example notebooks import the following databricks archive:
-```https://mmlspark.blob.core.windows.net/dbcs/MMLSpark%20Examples%20v0.13.dbc```
+```https://mmlspark.blob.core.windows.net/dbcs/MMLSpark%20Examples%20v0.14.dbc```
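For clusters you manage programmatically, the same Maven coordinates can also be attached with a REST call rather than through the workspace UI. A minimal sketch follows, assuming the Databricks Libraries REST API (`/api/2.0/libraries/install`), a personal access token, and a placeholder cluster id; none of these are part of the documented flow above.

```python
# Hedged sketch: attach Azure:mmlspark:0.14 to an existing cluster via the
# Databricks Libraries REST API. Endpoint, payload shape, and environment
# variables are assumptions; verify against your workspace's API docs.
import os
import requests

host = os.environ["DATABRICKS_HOST"]    # e.g. https://<region>.azuredatabricks.net
token = os.environ["DATABRICKS_TOKEN"]  # personal access token

resp = requests.post(
    host + "/api/2.0/libraries/install",
    headers={"Authorization": "Bearer " + token},
    json={
        "cluster_id": "<your-cluster-id>",  # placeholder
        "libraries": [{"maven": {"coordinates": "Azure:mmlspark:0.14"}}],
    },
)
resp.raise_for_status()
```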
### Docker
@@ -208,7 +213,7 @@ the above example, or from python:
```python
import pyspark
spark = pyspark.sql.SparkSession.builder.appName("MyApp") \
.config("spark.jars.packages", "Azure:mmlspark:0.13") \
.config("spark.jars.packages", "Azure:mmlspark:0.14") \
.getOrCreate()
import mmlspark
```
@@ -224,7 +229,7 @@ running script actions, see [this
guide](https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-hadoop-customize-cluster-linux#use-a-script-action-during-cluster-creation).
The script action url is:
-<https://mmlspark.azureedge.net/buildartifacts/0.13/install-mmlspark.sh>.
+<https://mmlspark.azureedge.net/buildartifacts/0.14/install-mmlspark.sh>.
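If you prefer to see what the script does before applying it, one quick way (not part of the documented steps) is to download it locally and skim it; the sketch below only assumes the `requests` package.

```python
# Hedged helper: fetch the 0.14 install script so it can be reviewed before
# being submitted as an HDInsight script action.
import requests

url = "https://mmlspark.azureedge.net/buildartifacts/0.14/install-mmlspark.sh"
resp = requests.get(url)
resp.raise_for_status()
with open("install-mmlspark.sh", "w") as f:
    f.write(resp.text)
print(resp.text[:400])  # skim the opening lines
```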
If you're using the Azure Portal to run the script action, go to `Script
actions` → `Submit new` in the `Overview` section of your cluster blade. In
@@ -240,7 +245,7 @@ your `build.sbt`:
```scala
resolvers += "MMLSpark Repo" at "https://mmlspark.azureedge.net/maven"
-libraryDependencies += "com.microsoft.ml.spark" %% "mmlspark" % "0.13"
+libraryDependencies += "com.microsoft.ml.spark" %% "mmlspark" % "0.14"
```
### Building from source
@@ -314,4 +319,4 @@ PMML](https://github.com/alipay/jpmml-sparkml-lightgbm)
*Apache®, Apache Spark, and Spark® are either registered trademarks or
trademarks of the Apache Software Foundation in the United States and/or other
-countries.*
+countries.*
docs/R-setup.md
@@ -10,7 +10,7 @@ To install the current MMLSpark package for R use:
```R
...
-devtools::install_url("https://mmlspark.azureedge.net/rrr/mmlspark-0.13.zip")
+devtools::install_url("https://mmlspark.azureedge.net/rrr/mmlspark-0.14.zip")
...
```
@@ -23,7 +23,7 @@ It will take some time to install all dependencies. Then, run:
library(sparklyr)
library(dplyr)
config <- spark_config()
-config$sparklyr.defaultPackages <- "Azure:mmlspark:0.13"
+config$sparklyr.defaultPackages <- "Azure:mmlspark:0.14"
sc <- spark_connect(master = "local", config = config)
...
```
@@ -83,7 +83,7 @@ and then use spark_connect with method = "databricks":
```R
install.packages("devtools")
-devtools::install_url("https://mmlspark.azureedge.net/rrr/mmlspark-0.13.zip")
+devtools::install_url("https://mmlspark.azureedge.net/rrr/mmlspark-0.14.zip")
library(sparklyr)
library(dplyr)
sc <- spark_connect(method = "databricks")
docs/docker.md
@@ -29,7 +29,7 @@ You can now select one of the sample notebooks and run it, or create your own.
In the above, `microsoft/mmlspark` specifies the project and image name that you
want to run. There is another component implicit here which is the *tag* (=
version) that you want to use — specifying it explicitly looks like
-`microsoft/mmlspark:0.13` for the `0.13` tag.
+`microsoft/mmlspark:0.14` for the `0.14` tag.
Leaving `microsoft/mmlspark` by itself has an implicit `latest` tag, so it is
equivalent to `microsoft/mmlspark:latest`. The `latest` tag is identical to the
@@ -47,7 +47,7 @@ that you will probably want to use can look as follows:
-e ACCEPT_EULA=y \
-p 127.0.0.1:80:8888 \
-v ~/myfiles:/notebooks/myfiles \
-    microsoft/mmlspark:0.13
+    microsoft/mmlspark:0.14
```
In this example, backslashes are used to break things up for readability; you
@@ -59,7 +59,7 @@ path and line breaks looks a little different:
-e ACCEPT_EULA=y `
-p 127.0.0.1:80:8888 `
-v C:\myfiles:/notebooks/myfiles `
-    microsoft/mmlspark:0.13
+    microsoft/mmlspark:0.14
```
Let's break this command and go over the meaning of each part:
@@ -143,7 +143,7 @@ Let's break this command and go over the meaning of each part:
model.write().overwrite().save('myfiles/myTrainedModel.mml')
```
-* **`microsoft/mmlspark:0.13`**
+* **`microsoft/mmlspark:0.14`**
Finally, this specifies an explicit version tag for the image that we want to
run.
docs/gpu-setup.md
@@ -26,7 +26,7 @@ to check availability in your data center.
MMLSpark provides an Azure Resource Manager (ARM) template to create a
default setup that includes an HDInsight cluster and a GPU machine for
training. The template can be found here:
-<https://mmlspark.azureedge.net/buildartifacts/0.13/deploy-main-template.json>.
+<https://mmlspark.azureedge.net/buildartifacts/0.14/deploy-main-template.json>.
It has the following parameters that configure the HDI Spark cluster and
the associated GPU VM:
@@ -69,7 +69,7 @@ GPU VM setup template at experimentation time.
### 1. Deploy an ARM template within the [Azure Portal](https://ms.portal.azure.com/)
[Click here to open the above main
-template](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fmmlspark.azureedge.net%2Fbuildartifacts%2F0.13%2Fdeploy-main-template.json)
+template](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fmmlspark.azureedge.net%2Fbuildartifacts%2F0.14%2Fdeploy-main-template.json)
in the Azure portal.
(If needed, you can click the **Edit template** button to view and edit the
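The template linked above can also be deployed without the portal. The sketch below sends the ARM deployment request directly over REST; the api-version, token source, resource group name, and empty parameters block are all assumptions, and the actual parameter names are defined by the template itself.

```python
# Hedged sketch: deploy the 0.14 main template via the ARM REST API instead
# of the portal. Values marked as assumptions must be adjusted.
import os
import requests

subscription = os.environ["AZURE_SUBSCRIPTION_ID"]
token = os.environ["AZURE_ARM_TOKEN"]     # e.g. from `az account get-access-token`
resource_group = "mmlspark-gpu-rg"        # assumed, pre-created resource group

url = ("https://management.azure.com/subscriptions/" + subscription +
       "/resourcegroups/" + resource_group +
       "/providers/Microsoft.Resources/deployments/mmlspark-main"
       "?api-version=2018-05-01")         # assumed api-version

body = {
    "properties": {
        "mode": "Incremental",
        "templateLink": {
            "uri": "https://mmlspark.azureedge.net/buildartifacts/0.14/deploy-main-template.json"
        },
        "parameters": {},  # fill in the parameters the template expects
    }
}

resp = requests.put(url, headers={"Authorization": "Bearer " + token}, json=body)
resp.raise_for_status()
print(resp.json().get("properties", {}).get("provisioningState"))
```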
