How do biases work in an MLP network? #6
Comments
Hi,

According to the SNNS user manual, the bias works as follows: in contrast to other network simulators, where the bias (threshold) of a unit is simulated as the weight of a link from a special unit with constant output 1, SNNS implements the bias as a parameter of the unit itself.

Does this answer your question?

Regards,
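In other words, the two conventions are mathematically equivalent; SNNS simply stores the bias in the unit rather than in an extra link. A minimal sketch in R (illustrative names, not RSNNS code):

```r
logistic <- function(x) 1 / (1 + exp(-x))

x <- c(0.5, -1.2)   # inputs to one unit
w <- c(0.8, 0.3)    # link weights
b <- 0.25           # bias (threshold)

# Other simulators: bias as the weight of a link from a constant "on" unit
net_on_unit <- sum(c(w, b) * c(x, 1))

# SNNS: bias stored as a unit parameter and added to the net input directly
net_unit_parameter <- sum(w * x) + b

all.equal(net_on_unit, net_unit_parameter)  # TRUE: the two are equivalent
logistic(net_unit_parameter)                # the unit's activation
```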
Hi Christoph, thank you for your attention. It answered my question, but I have another point to ask about: I implemented the feedforward step and applied the weights and biases of an MLP trained using RSNNS.

$fullWeightMatrix
               Input_p Input_q Input_lo Hidden_2_1 Hidden_2_2 Hidden_2_3 Hidden_2_4 Hidden_2_5 Hidden_2_6 Output_target
Input_p              0       0        0 -1.7307487   6.173544   1.265264 -5.3201742  0.9761125   6.169746      0.000000
Input_q              0       0        0 -1.8996859   6.228392   1.243880 -5.4633279  0.6697564   6.095452      0.000000
Input_lo             0       0        0  0.3186978  -5.617334  -4.962238  0.5265539  0.1827270  -5.520674      0.000000
Hidden_2_1           0       0        0  0.0000000   0.000000   0.000000  0.0000000  0.0000000   0.000000     -1.564677
Hidden_2_2           0       0        0  0.0000000   0.000000   0.000000  0.0000000  0.0000000   0.000000     10.262736
Hidden_2_3           0       0        0  0.0000000   0.000000   0.000000  0.0000000  0.0000000   0.000000    -10.078181
Hidden_2_4           0       0        0  0.0000000   0.000000   0.000000  0.0000000  0.0000000   0.000000     -8.505977
Hidden_2_5           0       0        0  0.0000000   0.000000   0.000000  0.0000000  0.0000000   0.000000      2.194988
Hidden_2_6           0       0        0  0.0000000   0.000000   0.000000  0.0000000  0.0000000   0.000000    -10.033887
Output_target        0       0        0  0.0000000   0.000000   0.000000  0.0000000  0.0000000   0.000000      0.000000

$unitDefinitions
   unitNo      unitName      unitAct    unitBias        type posX posY posZ      actFunc      outFunc sites
1       1       Input_p 0.000000e+00  0.16126040  UNIT_INPUT    1    0    0 Act_Identity Out_Identity
2       2       Input_q 0.000000e+00 -0.28681308  UNIT_INPUT    2    0    0 Act_Identity Out_Identity
3       3      Input_lo 1.000000e+00 -0.02067816  UNIT_INPUT    3    0    0 Act_Identity Out_Identity
4       4    Hidden_2_1 4.247330e-01 -0.62207133 UNIT_HIDDEN    1    2    0 Act_Logistic Out_Identity
5       5    Hidden_2_2 3.001675e-02  2.14181113 UNIT_HIDDEN    2    2    0 Act_Logistic Out_Identity
6       6    Hidden_2_3 9.972194e-01 10.84456730 UNIT_HIDDEN    3    2    0 Act_Logistic Out_Identity
7       7    Hidden_2_4 7.922895e-01  0.81222737 UNIT_HIDDEN    4    2    0 Act_Logistic Out_Identity
8       8    Hidden_2_5 5.398569e-01 -0.02296048 UNIT_HIDDEN    5    2    0 Act_Logistic Out_Identity
9       9    Hidden_2_6 7.310683e-05 -4.00284147 UNIT_HIDDEN    6    2    0 Act_Logistic Out_Identity
10     10 Output_target 3.699642e-06  3.45435309 UNIT_OUTPUT    1    4    0 Act_Logistic Out_Identity

I noticed the ...
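To make the mechanics explicit, here is a minimal sketch of the feedforward step in plain R, using the weights and biases posted above and assuming the SNNS conventions visible in the data: logistic hidden/output units computing logistic(net input + bias), and identity input units. The helper names are mine; with the input pattern (0, 0, 1) it reproduces the unitAct values in the table:

```r
logistic <- function(x) 1 / (1 + exp(-x))

# Input -> hidden weights, taken row-wise from $fullWeightMatrix above
W_ih <- matrix(c(-1.7307487, 6.173544, 1.265264, -5.3201742, 0.9761125, 6.169746,
                 -1.8996859, 6.228392, 1.243880, -5.4633279, 0.6697564, 6.095452,
                  0.3186978, -5.617334, -4.962238, 0.5265539, 0.1827270, -5.520674),
               nrow = 3, byrow = TRUE)            # rows: Input_p, Input_q, Input_lo
w_ho  <- c(-1.564677, 10.262736, -10.078181,      # hidden -> output weights
           -8.505977, 2.194988, -10.033887)
b_hid <- c(-0.62207133, 2.14181113, 10.84456730,  # unitBias of Hidden_2_1..Hidden_2_6
            0.81222737, -0.02296048, -4.00284147)
b_out <- 3.45435309                               # unitBias of Output_target

x <- c(0, 0, 1)  # input activations shown above; identity units ignore their bias

hidden <- logistic(as.vector(x %*% W_ih) + b_hid)
output <- logistic(sum(hidden * w_ho) + b_out)

hidden  # ~ 0.4247330 0.03001675 0.9972194 0.7922895 0.5398569 7.310683e-05
output  # ~ 3.699642e-06, matching unitAct of Output_target
```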
Hi,

unitAct is the current activation of that unit, i.e., its current output. The input units use Act_Identity, which does not use the bias. If you want ...

Regards,
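To illustrate that point with values from the $unitDefinitions table above (an illustrative sketch of the two activation functions, not the SNNS source):

```r
logistic <- function(x) 1 / (1 + exp(-x))

# Act_Identity passes the net input through and never touches unitBias;
# Act_Logistic adds the bias to the net input before squashing.
act_identity <- function(net, bias) net
act_logistic <- function(net, bias) logistic(net + bias)

# Input_lo: unitAct is exactly its input (1.0) although its unitBias is non-zero
act_identity(1.0, -0.02067816)        # 1.0
# Hidden_2_1: net input 0.3186978 (from Input_lo), unitBias -0.62207133
act_logistic(0.3186978, -0.62207133)  # ~0.4247330, the unitAct shown above
```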
Thank you, Christoph.
I noticed that biases are not applied in the conventional manner in a multilayer perceptron model: it is not clear how the feedforward step applies the biases between the layers and their neurons.
Could you please explain how they work?