Added steepness parameter to ANN
gasagna committed Oct 28, 2014
1 parent fc6b276 commit db4ce37
Showing 2 changed files with 9 additions and 4 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -45,9 +45,9 @@ Artificial Neural Network
-------------------------
Now we create a neural network, i.e. a multi-layer perceptron.

net = ANN([5, 5, 1], [:sigmoid_symmetric, :linear]; b=0.1, errorfunc=:tanh)
net = ANN([5, 5, 1], [:sigmoid_symmetric, :linear]; b=0.1, errorfunc=:tanh, steepness=1.0)

The first input is an array of Ints, with the number of nodes in each of the network layers. A bias node is also present in each layer except for the last one (see FANN documentation). The second input is an array of `n_layers-1` symbols that specifies the type of activation of the nodes in each layer except for the first one, which is always linear. Available activation functions are documented in src/constants.jl. The third parameter `b` is a float that specifies the half-width of the interval around zero over which random initial values for the network weights are drawn. The last argument to `ANN` is a symbol that specifies the type of error function used for training, it can be either `:tanh` (default) or `:linear`.
The first input is an array of Ints, with the number of nodes in each of the network layers. A bias node is also present in each layer except for the last one (see FANN documentation). The second input is an array of `n_layers-1` symbols that specifies the type of activation of the nodes in each layer except for the first one, which is always linear. Available activation functions are documented in src/constants.jl. The third parameter `b` is a float that specifies the half-width of the interval around zero over which random initial values for the network weights are drawn. The fourth argument to `ANN` is a symbol that specifies the type of error function used for training; it can be either `:tanh` (default) or `:linear`. The last parameter, `steepness`, sets the steepness of the activation functions of each layer except the input layer.
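The FANN manual defines the symmetric sigmoid as `tanh(steepness * x)`, so the new `steepness` keyword controls how sharply the activations saturate around zero. A minimal pure-Julia sketch of the effect (an illustration only, not FANN's actual implementation):

```julia
# Symmetric sigmoid as defined by FANN: y = tanh(steepness * x).
# Larger steepness gives a sharper transition around zero.
sigmoid_symmetric(x; steepness=1.0) = tanh(steepness * x)

y_default = sigmoid_symmetric(0.5)                 # ≈ 0.46
y_steep   = sigmoid_symmetric(0.5, steepness=5.0)  # ≈ 0.99, near saturation
```

With `steepness=1.0` (the default) the behaviour of existing networks is unchanged.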

The network can be trained as

9 changes: 7 additions & 2 deletions src/ann.jl
@@ -14,7 +14,7 @@ type ANN
finalizer(ann, destroy)
ann
end
function ANN(layers::Vector{Int}, activation::Vector{Symbol}; b::Float64=0.1, errorfunc::Symbol=:tanh)
function ANN(layers::Vector{Int}, activation::Vector{Symbol}; b::Float64=0.1, errorfunc::Symbol=:tanh, steepness::Float64=1.0)
# Artificial Neural Network type
#
# Parameters
@@ -23,6 +23,7 @@ type ANN
# activation : array of symbols with activation function for each layer excluding the input layer
# b : [-b, b] defines the range for random initialisation of the weights
# errorfunc : the error function used for training
# steepness : steepness of the activation functions, applied to all layers except the input layer

# activation function for hidden and output layers
length(activation) == length(layers) - 1 || error("wrong dimension of activation function vector")
@@ -35,12 +36,16 @@ type ANN
if ann == C_NULL
error("Error in fann_create_standard_array")
end
# set activation function for each layer
# set activation function and steepness for each layer
for layer = 1:length(layers)-1
ccall((:fann_set_activation_function_layer, libfann),
Void,
(Ptr{fann}, fann_activationfunc_enum, Cint),
ann, act2uint(activation[layer]), layer)
ccall((:fann_set_activation_steepness_layer, libfann),
Void,
(Ptr{fann}, fann_type, Cint),
ann, steepness, layer)
end
# randomize weights
ccall((:fann_randomize_weights, libfann),
