mlpML tuneGrid checking implementation flawed #829
Comments
There is a bug in the code, but not for what you show. It is supposed to stop execution if the second layer has zero units since that is nonsensical (to me, at least). The intention is to convert your specification above to be
It looks like mxnet treats c(5,0,7) as c(5,7), so it would be consistent with that. On the other hand, the neuralnet model seems to treat c(5,0,7) as c(5), so there's no existing consensus among models. I'm fairly new to the field and don't have an intuition for how the community would want to interpret it, so I'd defer to your opinion.
How does that look?
That looks perfect! Clears up any ambiguity with a good warning message.
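To make the behaviour discussed above concrete, here is a rough, standalone sketch of a conversion-plus-warning check over the three layer parameters. It is purely illustrative (the function name `collapse_layers` is made up here) and is not the actual patch applied to mlpML.R.

```r
# Illustrative only: collapse empty hidden layers the way mxnet does,
# but warn so the user knows the specification was changed.
collapse_layers <- function(layer1, layer2, layer3) {
  sizes <- c(layer1, layer2, layer3)
  if (layer2 == 0 && layer3 > 0) {
    warning("the second layer has zero units; fitting layers c(",
            layer1, ", ", layer3, ") instead", call. = FALSE)
  }
  sizes[sizes > 0]   # drop any zero-sized layers
}

collapse_layers(5, 0, 7)   # warns, then returns c(5, 7)
```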
The mlpML model treats a network size definition of c(5, 0, 7) the same as c(5), silently ignoring everything after the empty layer instead of producing an error. The caret implementation of mlpML has a check intended to catch this on line 28 of mlpML.R, but the logic is implemented incorrectly.
```r
if(param$layer2 == 0 & param$layer2 > 0)
  stop("the second layer must have at least one hidden unit if a third layer is specified")
```

should be

```r
if(param$layer2 == 0 & param$layer3 > 0)
  stop("the second layer must have at least one hidden unit if a third layer is specified")
```

As written, the original condition tests param$layer2 twice (both == 0 and > 0), which can never both be true, so the check never fires.

Minimal, runnable code:
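The original reproduction was not captured above; the following is a rough sketch of the kind of call that exercises the flawed check, assuming the RSNNS backend used by mlpML is installed. The simulated data via caret::twoClassSim is my own choice here, not the reporter's.

```r
library(caret)

set.seed(1)
dat <- twoClassSim(200)          # simulated two-class data shipped with caret

# With the flawed check, this single-row grid is accepted silently even
# though the second layer has zero units and a third layer is requested.
fit <- train(
  Class ~ ., data = dat,
  method    = "mlpML",
  tuneGrid  = data.frame(layer1 = 5, layer2 = 0, layer3 = 7),
  trControl = trainControl(method = "none")
)
```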
Session Info:
Thanks!