Poor performance of mxnet LinearRegressionOutput #4287
For a network without a hidden layer, the best performance should match the result from lm. If you change the optimizer to adam, without a fixed learning rate, you will get a reasonable outcome. For a network with a hidden layer, I don't think nnet uses an activation function, and the optimizer is also a potential issue here.
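The thread does not include the adam configuration itself, so as a hedged illustration of why Adam can succeed where fixed-learning-rate SGD stalls, here is a minimal NumPy sketch of the no-hidden-layer case, which reduces to plain linear regression. This is not the mxnet code from the issue; the data, seed, and hyper-parameters are invented for the example.

```python
import numpy as np

# Stand-in for the no-hidden-layer network with a squared-error
# (LinearRegressionOutput-style) loss: plain linear regression on
# synthetic data, trained with full-batch Adam.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w, true_b = np.array([2.0, -1.0, 0.5]), 0.3
y = X @ true_w + true_b + 0.01 * rng.normal(size=200)

w, b = np.zeros(3), 0.0
m_w, v_w = np.zeros(3), np.zeros(3)
m_b = v_b = 0.0
lr, beta1, beta2, eps = 0.01, 0.9, 0.999, 1e-8

for t in range(1, 2001):
    err = X @ w + b - y
    g_w = X.T @ err / len(y)   # gradient of the mean squared error / 2
    g_b = err.mean()
    # Adam keeps running estimates of the first and second moments of the
    # gradient, giving each parameter an adaptive step size, so no
    # hand-tuned fixed learning rate is required.
    m_w = beta1 * m_w + (1 - beta1) * g_w
    v_w = beta2 * v_w + (1 - beta2) * g_w**2
    m_b = beta1 * m_b + (1 - beta1) * g_b
    v_b = beta2 * v_b + (1 - beta2) * g_b**2
    mhat_w, vhat_w = m_w / (1 - beta1**t), v_w / (1 - beta2**t)
    mhat_b, vhat_b = m_b / (1 - beta1**t), v_b / (1 - beta2**t)
    w -= lr * mhat_w / (np.sqrt(vhat_w) + eps)
    b -= lr * mhat_b / (np.sqrt(vhat_b) + eps)

print(np.round(w, 2), round(b, 2))
```

The bias-corrected moment estimates are what make the step size self-scaling, which is the property the comment above is relying on when it says to drop the fixed learning rate.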
Many thanks for the response. Removing the fixed learning rate and changing to the adam optimizer was a big help. Results are attached. I have also included the performance of …. As you have stated, the …. I don't understand your point about nnet not using an activation function. Is there any default regularisation in …?
It seems that using "rmsprop" for optimization offers a further improvement, as does increasing the batch size. For reference, pasted below is a version of the code, and results, in which mxnet performs well compared to …. Many thanks for the help. RMSE errors for the 5 models:
Plots of training fit (linear model results shown in green):

And the code that produces these:
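To make the "rmsprop plus a larger batch size" observation concrete, here is a hedged NumPy sketch of mini-batch RMSProp on the same kind of linear fit, run with a small and a large batch. Again, this is not the mxnet code from the thread; the helper name `fit_rmsprop`, the data, and all hyper-parameters are invented for illustration.

```python
import numpy as np

def fit_rmsprop(batch_size, steps=4000, lr=0.01, decay=0.9, eps=1e-8):
    # Mini-batch RMSProp on synthetic linear-regression data. A larger
    # batch_size gives a less noisy gradient estimate, which is one
    # plausible reason increasing it helped in the thread.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 2))
    y = X @ np.array([1.5, -2.0]) + 0.05 * rng.normal(size=1000)
    w, cache = np.zeros(2), np.zeros(2)
    for _ in range(steps):
        idx = rng.integers(0, len(y), size=batch_size)
        g = X[idx].T @ (X[idx] @ w - y[idx]) / batch_size
        # RMSProp: running mean of squared gradients scales each step.
        cache = decay * cache + (1 - decay) * g**2
        w -= lr * g / (np.sqrt(cache) + eps)
    return w

w_small, w_big = fit_rmsprop(10), fit_rmsprop(200)
print(np.round(w_small, 2), np.round(w_big, 2))
```

Both runs recover the true coefficients, but the large-batch run settles closer to them, since the per-step gradient noise that RMSProp rescales is smaller.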
I have been unable to get reasonable performance using the mxnet LinearRegressionOutput layer. Full details of the problem, including a self-contained example, are given in the following SO question.
The question may seem rather broad ("I'm getting poor performance"), so the answer should perhaps be the obvious one (do some hyper-parameter tuning). However, given the simplicity of the regression problem considered, and the much better performance of other neural-net libraries "out of the box", I thought this might be of general interest.
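For readers wondering how "out-of-the-box" settings can fail on so simple a problem, here is a small NumPy illustration of how sensitive plain fixed-learning-rate gradient descent is to that one hyper-parameter. The data and the learning rates tried are hypothetical, not taken from the SO question.

```python
import numpy as np

def final_rmse(lr, steps=500):
    # Plain full-batch gradient descent with a fixed learning rate on a
    # one-feature regression: too small a rate barely moves, too large a
    # rate diverges, and only a middling rate fits well.
    rng = np.random.default_rng(0)
    X = rng.normal(size=100)
    y = 3.0 * X + 0.1 * rng.normal(size=100)
    w = 0.0
    for _ in range(steps):
        g = X @ (w * X - y) / len(y)   # gradient of MSE/2 w.r.t. w
        w -= lr * g
        if not np.isfinite(w):          # diverged: report infinite error
            return np.inf
    return float(np.sqrt(np.mean((w * X - y) ** 2)))

for lr in (1e-4, 1e-2, 1.0, 5.0):
    print(lr, final_rmse(lr))
```

Adaptive optimizers such as adam and rmsprop effectively remove this single global scale, which is consistent with the fixes that worked in the comments above.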