* Returns an accumulator function which incrementally performs binary classification using stochastic gradient descent (SGD).
*
* ## Method
*
* - The sub-gradient of the loss function is estimated for each datum, and the classification model is updated incrementally, with a decreasing learning rate and L2 regularization of the model feature weights (see the first example below).
* - Stochastic gradient descent is sensitive to the scaling of the features. One is advised to either scale each feature to `[0,1]` or `[-1,1]` or to transform each feature into z-scores with zero mean and unit variance. Keep in mind that the same scaling must be applied to test data in order to obtain accurate predictions (see the second example below).
* - In general, the more data provided to an accumulator, the more reliable the model predictions.
* @param options.learningRate - string denoting the learning rate to use. Can be `constant`, `pegasos`, or `basic` (default: 'basic')
* @param options.loss - string denoting the loss function to use. Can be `hinge`, `log`, `modifiedHuber`, `perceptron`, or `squaredHinge` (default: 'log')
* @param options.intercept - boolean indicating whether to include an intercept (default: true)
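*
* The following is a minimal sketch of the per-datum update described above, assuming the hinge loss and a learning rate which decays as `1/(lambda*t)`. It is illustrative only, not this package's actual implementation, and the helper names (`update`, `dot`) are hypothetical.
*
* @example
* // Hypothetical sketch: one SGD update with hinge loss, a decreasing learning rate, and L2 regularization. Assumes labels `y` in {-1,1}.
* function dot( a, b ) {
*     var s = 0.0;
*     var i;
*     for ( i = 0; i < a.length; i++ ) {
*         s += a[ i ] * b[ i ];
*     }
*     return s;
* }
*
* function update( w, x, y, lambda, t ) {
*     var eta = 1.0 / ( lambda*t ); // decreasing learning rate
*     var m = y * dot( w, x );      // margin before this update
*     var i;
*
*     // Sub-gradient of the L2 penalty shrinks all weights toward zero:
*     for ( i = 0; i < w.length; i++ ) {
*         w[ i ] *= 1.0 - ( eta*lambda );
*     }
*     // The hinge loss sub-gradient is nonzero only when the margin is violated:
*     if ( m < 1.0 ) {
*         for ( i = 0; i < w.length; i++ ) {
*             w[ i ] += eta * y * x[ i ];
*         }
*     }
*     return w;
* }
*
* // e.g., on the t-th datum: w = update( w, x, y, 0.01, t );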
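*
* A hypothetical usage example follows. The factory name `incrBinaryClassification` and its call signature are assumptions made for illustration and are not confirmed by this excerpt; only the option names and defaults come from the documentation above.
*
* @example
* // Assumed factory signature: create an accumulator for two features:
* var acc = incrBinaryClassification( 2, {
*     'learningRate': 'basic',
*     'loss': 'log',
*     'intercept': true
* });
*
* // Features are pre-scaled to z-scores; the same scaling must later be applied to test data:
* acc( [ 0.3, -1.2 ], 1 );
* acc( [ -0.5, 0.7 ], -1 );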