Different activation functions #27
Comments
I'm currently working on implementing tanh in the recurrent branch. It's about 90% working (the weights aren't zeroing out, and they blow up to infinity after a few runs) and should be done soon (maybe a week or two?). I think after I make headway there, I can address this.
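As an aside on the blow-up mentioned above: when a tanh network's weights run off to infinity, the backward pass is a common culprit. For reference, here is a minimal standalone sketch of tanh and its derivative (plain JavaScript, not the branch's actual code):

```javascript
// tanh squashes any input into (-1, 1).
function tanh(x) {
  return Math.tanh(x);
}

// Derivative used during backpropagation: d/dx tanh(x) = 1 - tanh(x)^2.
// Note it saturates toward 0 for large |x|, which is why exploding
// activations tend to stall or destabilize training.
function tanhDerivative(x) {
  const t = Math.tanh(x);
  return 1 - t * t;
}

console.log(tanhDerivative(0)); // 1
```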
Just FYI, I have not forgotten. A minor surgery and a bunch of planning later, this is still on the board.
Also, pull requests are always welcome.
Booya! https://github.com/harthur-org/brain.js/tree/other-activation-functions
Would you be so kind as to give a code review and any insights?
Example:

```js
const net = new brain.NeuralNetwork({
  activation: 'leaky-relu'
});
```
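For context on what that `'leaky-relu'` option refers to: leaky ReLU passes positive inputs through unchanged and scales negatives by a small slope, so the gradient never vanishes entirely for negative inputs. A minimal standalone sketch (the slope value and function names here are illustrative, not brain.js internals):

```javascript
// Small slope for negative inputs; 0.01 is a common default,
// though the library may use a different value.
const alpha = 0.01;

// Leaky ReLU activation.
function leakyRelu(x) {
  return x > 0 ? x : alpha * x;
}

// Its derivative, used during backpropagation.
function leakyReluDerivative(x) {
  return x > 0 ? 1 : alpha;
}

console.log(leakyRelu(2));  // 2
console.log(leakyRelu(-2)); // -0.02
```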
Hey,
first off, cool work :-)! As far as I can see, brain.js currently uses a sigmoid/logistic activation function. I think it would be quite beneficial to support additional activation functions.
I could implement some of them, such as tanh, mapped into the (0, 1) range that is currently used for the output.
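Since tanh itself outputs values in (−1, 1) while the sigmoid-based output lies in (0, 1), one way to keep the existing output range is a simple affine rescale. A sketch (not existing brain.js code):

```javascript
// Map tanh's (-1, 1) output into (0, 1), matching the range the
// sigmoid-based output layer currently produces.
function tanhScaled(x) {
  return (Math.tanh(x) + 1) / 2;
}

console.log(tanhScaled(0)); // 0.5
```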