
How does bias work in an MLP network? #6

Closed

lucasvenez opened this issue Jul 14, 2016 · 4 comments

@lucasvenez commented Jul 14, 2016

I noticed that biases are not applied in the conventional manner in a multilayer perceptron model. It is not clear how the feedforward step applies the bias between a layer and its neurons.

Could you please explain how they work?

@cbergmeir (Owner) commented

Hi,

according to the SNNS user manual
(http://www.ra.cs.uni-tuebingen.de/downloads/SNNS/SNNSv4.2.Manual.pdf),
the bias does the following:

bias: In contrast to other network simulators where the bias (threshold)
of a unit is simulated by a link weight from a special 'on'-unit, SNNS
represents it as a unit parameter. In the standard version of SNNS the
bias determines where the activation function has its steepest ascent.
(see e.g. the activation function Act_logistic).
Learning procedures like backpropagation change the bias of a unit like
a weight during training.
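
In other words, the bias shifts the logistic curve along the net-input axis instead of acting as a weight from an extra 'on' unit. A minimal R sketch of that idea (my illustration, not SNNS source code; the plus-sign convention matches the numbers that come up later in this thread):

# Logistic activation with a per-unit bias parameter, as in SNNS.
# The bias moves the point of steepest ascent to net = -bias.
act_logistic <- function(net, bias) 1 / (1 + exp(-(net + bias)))

act_logistic(net = 0, bias = 0)   # 0.5: steepest ascent at net = 0
act_logistic(net = -2, bias = 2)  # 0.5: the bias moved the steepest point to net = -2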

Does this answer your question?

Regards,
Christoph


@lucasvenez (Author) commented

Hi Christoph,

Thank you for your attention. That answered my question, but I have another point to ask about. I implemented the feedforward step and applied the weights and biases of an MLP trained using RSNNS.

$fullWeightMatrix
              Input_p Input_q Input_lo Hidden_2_1 Hidden_2_2 Hidden_2_3 Hidden_2_4 Hidden_2_5 Hidden_2_6 Output_target
Input_p             0       0        0 -1.7307487   6.173544   1.265264 -5.3201742  0.9761125   6.169746      0.000000
Input_q             0       0        0 -1.8996859   6.228392   1.243880 -5.4633279  0.6697564   6.095452      0.000000
Input_lo            0       0        0  0.3186978  -5.617334  -4.962238  0.5265539  0.1827270  -5.520674      0.000000
Hidden_2_1          0       0        0  0.0000000   0.000000   0.000000  0.0000000  0.0000000   0.000000     -1.564677
Hidden_2_2          0       0        0  0.0000000   0.000000   0.000000  0.0000000  0.0000000   0.000000     10.262736
Hidden_2_3          0       0        0  0.0000000   0.000000   0.000000  0.0000000  0.0000000   0.000000    -10.078181
Hidden_2_4          0       0        0  0.0000000   0.000000   0.000000  0.0000000  0.0000000   0.000000     -8.505977
Hidden_2_5          0       0        0  0.0000000   0.000000   0.000000  0.0000000  0.0000000   0.000000      2.194988
Hidden_2_6          0       0        0  0.0000000   0.000000   0.000000  0.0000000  0.0000000   0.000000    -10.033887
Output_target       0       0        0  0.0000000   0.000000   0.000000  0.0000000  0.0000000   0.000000      0.000000
$unitDefinitions
   unitNo      unitName      unitAct    unitBias        type posX posY posZ      actFunc      outFunc sites
1       1       Input_p 0.000000e+00  0.16126040  UNIT_INPUT    1    0    0 Act_Identity Out_Identity      
2       2       Input_q 0.000000e+00 -0.28681308  UNIT_INPUT    2    0    0 Act_Identity Out_Identity      
3       3      Input_lo 1.000000e+00 -0.02067816  UNIT_INPUT    3    0    0 Act_Identity Out_Identity      
4       4    Hidden_2_1 4.247330e-01 -0.62207133 UNIT_HIDDEN    1    2    0 Act_Logistic Out_Identity      
5       5    Hidden_2_2 3.001675e-02  2.14181113 UNIT_HIDDEN    2    2    0 Act_Logistic Out_Identity      
6       6    Hidden_2_3 9.972194e-01 10.84456730 UNIT_HIDDEN    3    2    0 Act_Logistic Out_Identity      
7       7    Hidden_2_4 7.922895e-01  0.81222737 UNIT_HIDDEN    4    2    0 Act_Logistic Out_Identity      
8       8    Hidden_2_5 5.398569e-01 -0.02296048 UNIT_HIDDEN    5    2    0 Act_Logistic Out_Identity      
9       9    Hidden_2_6 7.310683e-05 -4.00284147 UNIT_HIDDEN    6    2    0 Act_Logistic Out_Identity      
10     10 Output_target 3.699642e-06  3.45435309 UNIT_OUTPUT    1    4    0 Act_Logistic Out_Identity 
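
For reference, output in this shape is what RSNNS's extractNetInfo() returns; a minimal sketch of how to obtain it (the training data below are placeholders, purely illustrative):

library(RSNNS)

# Placeholder data: three inputs, one binary target.
inputs  <- matrix(runif(300), ncol = 3)
targets <- as.numeric(inputs[, 1] + inputs[, 2] > 1)

model <- mlp(inputs, targets, size = 6)  # six hidden units, as in the output above
info  <- extractNetInfo(model)
info$fullWeightMatrix  # all unit-to-unit weights in one matrix
info$unitDefinitions   # per-unit activation (unitAct), bias, and function names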

I noticed that unitAct is not used in the feedforward step and that the biases associated with the input units are not used either. But why are biases for the input units calculated at all? And what is unitAct's role?

@cbergmeir (Owner) commented

Hi,

unitAct is the current activation of that unit, i.e., its current
output. That obviously changes with each pattern/vector of inputs, so it
is probably the activation of the last pattern that was used.

The input units use Act_Identity, which doesn't apply a bias. If you want
a bias on the inputs, you'll have to use Act_IdentityPlusBias.
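
Putting the two answers together, here is a minimal R sketch of the feedforward pass (my reconstruction, not RSNNS source code). With the weights and biases printed above it reproduces the unitAct column, and it shows both that the bias enters the logistic with a plus sign and that the input-unit biases are stored but never applied:

# Logistic activation with a per-unit bias, as in SNNS.
logistic <- function(net, bias) 1 / (1 + exp(-(net + bias)))

# Input-to-hidden weights from $fullWeightMatrix (rows: Input_p, Input_q, Input_lo).
W_ih <- matrix(c(-1.7307487,  6.173544,  1.265264, -5.3201742, 0.9761125,  6.169746,
                 -1.8996859,  6.228392,  1.243880, -5.4633279, 0.6697564,  6.095452,
                  0.3186978, -5.617334, -4.962238,  0.5265539, 0.1827270, -5.520674),
               nrow = 3, byrow = TRUE)
W_ho     <- c(-1.564677, 10.262736, -10.078181, -8.505977, 2.194988, -10.033887)
bias_h   <- c(-0.62207133, 2.14181113, 10.84456730, 0.81222737, -0.02296048, -4.00284147)
bias_out <- 3.45435309

# Input units use Act_Identity: their biases exist in $unitDefinitions but are never applied.
x <- c(0, 0, 1)  # the last pattern, as recorded in unitAct of the input units

hidden <- logistic(as.vector(x %*% W_ih), bias_h)  # matches unitAct of Hidden_2_1..Hidden_2_6
output <- logistic(sum(hidden * W_ho), bias_out)   # matches unitAct of Output_target (3.699642e-06)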

Regards,
Christoph


@lucasvenez (Author) commented

Thank you, Christoph.
