Hi,
If I understand the source code correctly
https://github.com/forestagostinelli/Learned-Activation-Functions-Source/blob/master/src/caffe/layers/apl_layer.cpp#L71
and
https://github.com/forestagostinelli/Learned-Activation-Functions-Source/blob/master/src/caffe/layers/apl_layer.cpp#L30
then when used in convolutional layers, APL learns not C * sums parameters but H * W * C (== count/num) * sums parameters. So the same channel at different spatial locations does not share the same APL coefficients => APL with sums = 2 has H * W times more parameters than PReLU.
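
For concreteness, here is a rough back-of-the-envelope count (a minimal sketch, not code from the repo; the layer shape and the assumption of one slope/offset pair per sum are mine):

```cpp
#include <iostream>

int main() {
  // Hypothetical conv output shape, for illustration only.
  const long C = 256, H = 14, W = 14;
  const long sums = 2;  // number of piecewise segments (S in the APL paper)

  // PReLU (channel-shared): one learned slope per channel.
  const long prelu = C;

  // APL sharing coefficients per channel (what one might expect),
  // assuming one (slope, offset) pair per sum, i.e. 2 * sums per unit.
  const long apl_per_channel = C * 2 * sums;

  // APL as apl_layer.cpp appears to allocate it: a separate coefficient
  // set for every spatial location as well (count/num units).
  const long apl_per_location = H * W * C * 2 * sums;

  std::cout << "PReLU:             " << prelu << "\n";
  std::cout << "APL (per channel): " << apl_per_channel << "\n";
  std::cout << "APL (per loc.):    " << apl_per_location
            << " = " << (H * W) << "x the per-channel count\n";
  return 0;
}
```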
Is it intentional, or a typo?