Bump version to 0.13

mhamilton723 committed Jun 28, 2018
1 parent 05c5e96 commit ad2055634f82f250d8f53cfb8b88dfa4124cbaab
Showing with 22 additions and 22 deletions.
  1. +7 −7
  2. +2 −2 docs/
  3. +4 −4 docs/
  4. +9 −9 docs/
@@ -137,9 +137,9 @@ MMLSpark can be conveniently installed on existing Spark clusters via the
 `--packages` option, examples:
-spark-shell --packages Azure:mmlspark:0.12
-pyspark --packages Azure:mmlspark:0.12
-spark-submit --packages Azure:mmlspark:0.12 MyApp.jar
+spark-shell --packages Azure:mmlspark:0.13
+pyspark --packages Azure:mmlspark:0.13
+spark-submit --packages Azure:mmlspark:0.13 MyApp.jar
 This can be used in other Spark contexts too, for example, you can use MMLSpark
@@ -156,7 +156,7 @@ the above example, or from python:
 import pyspark
 spark = pyspark.sql.SparkSession.builder.appName("MyApp") \
-            .config("spark.jars.packages", "Azure:mmlspark:0.12") \
+            .config("spark.jars.packages", "Azure:mmlspark:0.13") \
 import mmlspark
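As an aside, the `Azure:mmlspark:0.13` coordinate used throughout this change can also be assembled programmatically, e.g. in a launcher script. This is an illustrative sketch, not part of the commit; the helper name is hypothetical:

```python
# Hypothetical helper: build a spark-submit invocation that pulls in the
# MMLSpark package coordinate used in the diff above.
def spark_submit_cmd(app_jar, package="Azure:mmlspark:0.13"):
    return ["spark-submit", "--packages", package, app_jar]

print(" ".join(spark_submit_cmd("MyApp.jar")))
# prints: spark-submit --packages Azure:mmlspark:0.13 MyApp.jar
```

Keeping the coordinate in one place like this is one way to avoid the scattered version strings this commit has to touch.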
@@ -172,7 +172,7 @@ running script actions, see [this
The script action url is:
If you're using the Azure Portal to run the script action, go to `Script
actions` → `Submit new` in the `Overview` section of your cluster blade. In
@@ -188,7 +188,7 @@ cloud](, create a new [library from Maven
in your workspace.
-For the coordinates use: `Azure:mmlspark:0.12`. Ensure this library is
+For the coordinates use: `Azure:mmlspark:0.13`. Ensure this library is
attached to all clusters you create.
Finally, ensure that your Spark cluster has at least Spark 2.1 and Scala 2.11.
@@ -202,7 +202,7 @@ your `build.sbt`:
resolvers += "MMLSpark Repo" at ""
-libraryDependencies += "" %% "mmlspark" % "0.12"
+libraryDependencies += "" %% "mmlspark" % "0.13"
### Building from source
@@ -10,7 +10,7 @@ To install the current MMLSpark package for R use:
@@ -23,7 +23,7 @@ It will take some time to install all dependencies. Then, run:
config <- spark_config()
-config$sparklyr.defaultPackages <- "Azure:mmlspark:0.12"
+config$sparklyr.defaultPackages <- "Azure:mmlspark:0.13"
sc <- spark_connect(master = "local", config = config)
@@ -29,7 +29,7 @@ You can now select one of the sample notebooks and run it, or create your own.
In the above, `microsoft/mmlspark` specifies the project and image name that you
want to run. There is another component implicit here which is the *tag* (=
version) that you want to use — specifying it explicitly looks like
-`microsoft/mmlspark:0.12` for the `0.12` tag.
+`microsoft/mmlspark:0.13` for the `0.13` tag.
Leaving `microsoft/mmlspark` by itself has an implicit `latest` tag, so it is
equivalent to `microsoft/mmlspark:latest`. The `latest` tag is identical to the
@@ -47,7 +47,7 @@ that you will probably want to use can look as follows:
-p \
-v ~/myfiles:/notebooks/myfiles \
In this example, backslashes are used to break things up for readability; you
@@ -59,7 +59,7 @@ path and line breaks looks a little different:
-p `
-v C:\myfiles:/notebooks/myfiles `
Let's break this command down and go over the meaning of each part:
@@ -143,7 +143,7 @@ Let's break this command and go over the meaning of each part:
-* **`microsoft/mmlspark:0.12`**
+* **`microsoft/mmlspark:0.13`**
Finally, this specifies an explicit version tag for the image that we want to
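The implicit-`latest` behavior described in this Docker section can be sketched as a small helper (hypothetical, for illustration only — real Docker tag resolution also handles registry hosts and digests):

```python
# Simplified sketch of Docker's tag defaulting: an image reference with no
# explicit tag is treated as having the `latest` tag.
# (Ignores registry hosts that contain a port, e.g. host:5000/img.)
def normalize_image(ref):
    return ref if ":" in ref else ref + ":latest"

print(normalize_image("microsoft/mmlspark"))       # microsoft/mmlspark:latest
print(normalize_image("microsoft/mmlspark:0.13"))  # microsoft/mmlspark:0.13
```

This is why pinning the explicit `0.13` tag, as this commit does, gives reproducible runs, while the bare image name silently tracks whatever `latest` currently points to.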
@@ -26,7 +26,7 @@ to check availability in your data center.
MMLSpark provides an Azure Resource Manager (ARM) template to create a
default setup that includes an HDInsight cluster and a GPU machine for
training. The template can be found here:
It has the following parameters that configure the HDI Spark cluster and
the associated GPU VM:
@@ -48,16 +48,16 @@ the associated GPU VM:
- `gpuVirtualMachineSize`: The size of the GPU virtual machine to create
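Parameters like those above are supplied to the ARM deployment in a standard parameter file. A minimal hedged sketch follows; the parameter name is taken from the list above, but the value (`Standard_NC6`) is an assumed example GPU VM size, not something this template prescribes:

```python
import json

# Minimal ARM deployment parameter file sketch. `Standard_NC6` is an assumed
# example value for the gpuVirtualMachineSize parameter listed above.
params = {"parameters": {"gpuVirtualMachineSize": {"value": "Standard_NC6"}}}
print(json.dumps(params, indent=2))
```

Check availability of GPU sizes in your data center, as the section above notes, before choosing a value.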
There are actually two additional templates that are used from this main template:
-- [`spark-cluster-template.json`](
+- [`spark-cluster-template.json`](
A template for creating an HDI Spark cluster within a VNet, including
MMLSpark and its dependencies. (This template installs MMLSpark using
the HDI script action:
-- [`gpu-vm-template.json`](
+- [`gpu-vm-template.json`](
A template for creating a GPU VM within an existing VNet, including
CNTK and other dependencies that MMLSpark needs for GPU training.
(This is done via a script action that runs
Note that these child templates can also be deployed independently, if
you don't need both parts of the installation. In particular, to scale
@@ -69,7 +69,7 @@ GPU VM setup template at experimentation time.
### 1. Deploy an ARM template within the [Azure Portal](
[Click here to open the above main
in the Azure portal.
(If needed, you can click the **Edit template** button to view and edit the
@@ -87,11 +87,11 @@ We also provide a convenient shell script to create a deployment on the
command line:
* Download the [shell
and make a local copy of it
* Create a JSON parameter file by downloading [this template
and modify it according to your specification.
You can now run the script — it takes the following arguments:
@@ -124,7 +124,7 @@ you for all needed values.
### 3. Deploy an ARM template with the MMLSpark Azure PowerShell
MMLSpark also provides a [PowerShell
to deploy ARM templates, similar to the above bash script. Run it with
`-?` to see the usage instructions (or use `get-help`). If needed,
install the Azure PowerShell cmdlets using the instructions in the
