
Added documentation and example for StreamingLR
freeman-lab committed Aug 20, 2014
1 parent 217b5e9 commit 05a1139
Showing 1 changed file with 74 additions and 0 deletions.
74 changes: 74 additions & 0 deletions docs/mllib-linear-methods.md
@@ -518,6 +518,80 @@ print("Mean Squared Error = " + str(MSE))
</div>
</div>

## Streaming linear regression

When data arrive in a streaming fashion, it is useful to fit regression models online,
updating the parameters of the model as new data arrive. MLlib currently supports
streaming linear regression using ordinary least squares. The fitting is similar
to that performed offline, except that it occurs on each batch of data, so the
model continually updates to reflect the data from the stream.
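
As a rough sketch of the per-batch update (the notation here is illustrative and is not taken
verbatim from the implementation): given a batch of labeled points `$(x_1, y_1), \ldots, (x_n, y_n)$`,
current weights `$w$`, and step size `$\alpha$`, a gradient step for the squared loss updates the
weights approximately as

`\[
w \leftarrow w - \frac{\alpha}{n} \sum_{i=1}^{n} \left( w^T x_i - y_i \right) x_i
\]`

up to the exact step-size schedule and mini-batch sampling used by the optimizer.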

### Examples

The following example demonstrates how to load training and testing data from two different
input streams of text files, parse the streams as labeled points, fit a linear regression model
online to the first stream, and make predictions on the second stream.

<div class="codetabs">

<div data-lang="scala" markdown="1">

First, we import the necessary classes for parsing our input data and creating the model.

{% highlight scala %}

import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD

{% endhighlight %}

Then we make input streams for training and testing data. We assume a StreamingContext `ssc`
has already been created; see the [Spark Streaming Programming Guide](streaming-programming-guide.html#initializing)
for more info. For this example, we use labeled points in both the training and testing streams,
but in practice you will likely want to use unlabeled vectors for test data (a sketch of that
variant follows the prediction step below).

{% highlight scala %}

val trainingData = ssc.textFileStream("/training/data/dir").map(LabeledPoint.parse)
val testData = ssc.textFileStream("/testing/data/dir").map(LabeledPoint.parse)

{% endhighlight %}

We create our model by initializing the weights to zero.

{% highlight scala %}

val model = new StreamingLinearRegressionWithSGD()
.setInitialWeights(Vectors.zeros(3))

{% endhighlight %}
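
Other parameters of the underlying SGD optimizer, such as the step size and the number of
iterations run on each batch, can also be configured through the corresponding setters.
The values below are arbitrary examples, not recommendations.

{% highlight scala %}

// Illustrative only: tune the optimizer before training starts.
val tunedModel = new StreamingLinearRegressionWithSGD()
  .setInitialWeights(Vectors.zeros(3))
  .setStepSize(0.5)      // gradient descent step size (example value)
  .setNumIterations(10)  // gradient descent iterations per batch (example value)

{% endhighlight %}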

Now we register the streams for training and testing and start the job.
Printing predictions alongside true labels lets us easily see the result.

{% highlight scala %}

model.trainOn(trainingData)
model.predictOnValues(testData.map(lp => (lp.label, lp.features))).print()

ssc.start()
ssc.awaitTermination()

{% endhighlight %}
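
As noted above, test data will often arrive as unlabeled vectors rather than labeled points.
A minimal sketch of that variant, assuming each line of the test files holds a bare vector in
the text format understood by `Vectors.parse` (the directory path is hypothetical):

{% highlight scala %}

// Parse each line as a bare feature vector, e.g. "[x1,x2,x3]" (no label).
val unlabeledTestData = ssc.textFileStream("/testing/data/dir").map(Vectors.parse)

// predictOn takes a stream of vectors and produces a stream of predicted labels.
model.predictOn(unlabeledTestData).print()

{% endhighlight %}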

We can now save text files with data to the training or testing folders.
Each line should be a data point formatted as `(y,[x1,x2,x3])`, where `y` is the label
and `x1,x2,x3` are the features. Anytime a text file is placed in `/training/data/dir`,
the model will update. Anytime a text file is placed in `/testing/data/dir`, you will see predictions.
As you feed more data to the training directory, the predictions
will get better!
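
For example (with arbitrary values), a line such as `(1.0,[2.0,3.0,4.0])` is parsed into a label
of `1.0` and a dense feature vector `[2.0,3.0,4.0]`:

{% highlight scala %}

// Illustrative only: how a single line of input is interpreted.
val point = LabeledPoint.parse("(1.0,[2.0,3.0,4.0])")
point.label    // 1.0
point.features // [2.0,3.0,4.0]

{% endhighlight %}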

</div>

</div>


## Implementation (developer)

Behind the scenes, MLlib implements a simple distributed version of stochastic gradient descent
