Dear @ALL,
In the implementation of the LearnNSE available in the MOA 2017.06, there is a chance for the sbkt variable to get really close to zero, leading to the computation of the log of infinity when calculating the ensemble weights.
this.ensembleWeights.add(Math.log(1.0 / sbkt));
This led to problems on the Gaussian dataset suggested by the original author of Learn++.NSE: http://users.rowan.edu/~polikar/research/NSE/
As a workaround, one of the original authors of Learn++.NSE checks whether "sbkt" is smaller than 0.01; if so, the value is set to 0.01.
It can be seen in: https://github.com/gditzler/IncrementalLearning/blob/master/src/learn_nse.m
Check the condition:
if net.beta(net.t,net.t)<net.threshold,
net.beta(net.t,net.t) = net.threshold;
end
It seems to solve the problem when implemented in the MOA version of the LearnNSE.
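The same clamp can be sketched in Java for the MOA implementation. This is a minimal illustration (class and method names are hypothetical, not the actual MOA code), assuming the 0.01 threshold from learn_nse.m:

```java
public class NseWeightFix {

    // Same cutoff used in the MATLAB reference (net.threshold).
    static final double THRESHOLD = 0.01;

    // Clamp sbkt to the threshold before taking the log, so the
    // ensemble weight log(1/sbkt) stays finite when sbkt approaches 0.
    static double ensembleWeight(double sbkt) {
        if (sbkt < THRESHOLD) {
            sbkt = THRESHOLD;
        }
        return Math.log(1.0 / sbkt);
    }

    public static void main(String[] args) {
        // Without the clamp, sbkt = 0 would give log(infinity).
        System.out.println(ensembleWeight(0.0)); // finite: log(100)
        System.out.println(ensembleWeight(0.5));
    }
}
```

In the MOA code, the clamp would go immediately before the line `this.ensembleWeights.add(Math.log(1.0 / sbkt));`.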
Best regards.