Inclusion of ANN method: fixing the last PR
ggurioli committed Mar 3, 2021
1 parent b8ab736 commit a6373b6
Showing 1 changed file with 18 additions and 17 deletions: ezyrb/ann.py
@@ -12,11 +12,11 @@ class ANN(Approximation):
     :param list layers: ordered list with the number of neurons of each hidden layer.
     :param torch.nn.modules.activation function: activation function at each layer,
-        except for the output layer at with Identity is considered by default.
-        A single activaction function can be passed or a list of them of length
-        equal to the number of hidden layers.
+        except for the output layer at with Identity is considered by default.
+        A single activation function can be passed or a list of them of length
+        equal to the number of hidden layers.
     :param list stop_training: list with the maximum number of training iterations
-        (int) and/or the desired tolerance on the training loss (float).
+        (int) and/or the desired tolerance on the training loss (float).
     :param torch.nn.Module loss: loss definition (Mean Squared if not given).
Example:
@@ -35,7 +35,8 @@ class ANN(Approximation):

     def __init__(self, layers, function, stop_training, loss=None):

-        if loss is None: loss = torch.nn.MSELoss()
+        if loss is None:
+            loss = torch.nn.MSELoss()

         if not isinstance(function, list):  # Single activation function passed
             function = [function] * (len(layers))
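The context lines above show how a single activation function is replicated once per hidden layer. That broadcast step can be sketched in plain Python; `broadcast_activation` is an illustrative helper name, not part of ezyrb's API:

```python
def broadcast_activation(function, layers):
    """Return one activation per hidden layer, mirroring the
    __init__ logic above (illustrative sketch, not ezyrb's code)."""
    if not isinstance(function, list):  # single activation passed
        function = [function] * len(layers)
    if len(function) != len(layers):
        raise ValueError("need one activation function per hidden layer")
    return function

# A single callable is replicated for each of the three hidden layers;
# a list of the right length passes through unchanged.
acts = broadcast_activation(abs, [10, 10, 10])
```
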
@@ -71,14 +72,14 @@ def _convert_torch_to_numpy(self, tensor):
"""
return tensor.detach().numpy()

-    def _build_model(self,points,values):
+    def _build_model(self, points, values):
         """
         Build the torch model.
         Considering the number of neurons per layer (self.layers), a
         feed-forward NN is defined:
-            - activation function from layer i>=0 to layer i+1: self.function[i];
-              activation function at the output layer: Identity (by default).
+        - activation function from layer i>=0 to layer i+1: self.function[i];
+          activation function at the output layer: Identity (by default).

         :param numpy.ndarray points: the coordinates of the given (training) points.
         :param numpy.ndarray values: the (training) values in the points.
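The layer construction described in the `_build_model` docstring can be sketched with standard PyTorch modules. `build_model` and its explicit dimension arguments are assumptions for illustration; the actual method reads the sizes from `self.layers` and the training data:

```python
import torch

def build_model(points_dim, values_dim, layers, functions):
    """Feed-forward net as described above: Linear layers sized by
    `layers`, `functions[i]` between layers, Identity at the output.
    (Illustrative sketch, not ezyrb's exact implementation.)"""
    sizes = [points_dim] + list(layers) + [values_dim]
    modules = []
    for i in range(len(sizes) - 1):
        modules.append(torch.nn.Linear(sizes[i], sizes[i + 1]))
        # functions[i] for i < number of hidden layers; Identity last.
        modules.append(functions[i] if i < len(functions) else torch.nn.Identity())
    return torch.nn.Sequential(*modules)

# Two-dimensional points, scalar values, two hidden layers of 10 and 5 neurons.
model = build_model(2, 1, [10, 5], [torch.nn.Tanh(), torch.nn.Tanh()])
y = model(torch.rand(4, 2))  # forward pass on a batch of 4 points
```
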
@@ -98,15 +99,15 @@ def fit(self, points, values):
         Build the ANN given 'points' and 'values' and perform training.

         Training procedure information:
-            - optimizer: Adam's method with default parameters
-              (see, e.g., https://pytorch.org/docs/stable/optim.html);
-            - loss: self.loss (if none, the Mean Squared Loss is set by default).
-            - stopping criterion: the fulfillment of the requested tolerance on the
-              training loss compatibly with the prescribed budget of training
-              iterations (if type(self.stop_training) is list); if type(self.stop_training)
-              is int or type(self.stop_training) is float, only the number of maximum
-              iterations or the accuracy level on the training loss is considered
-              as the stopping rule, respectively.
+        - optimizer: Adam's method with default parameters
+          (see, e.g., https://pytorch.org/docs/stable/optim.html);
+        - loss: self.loss (if none, the Mean Squared Loss is set by default).
+        - stopping criterion: the fulfillment of the requested tolerance on the
+          training loss compatibly with the prescribed budget of training
+          iterations (if type(self.stop_training) is list); if type(self.stop_training)
+          is int or type(self.stop_training) is float, only the number of maximum
+          iterations or the accuracy level on the training loss is considered
+          as the stopping rule, respectively.

         :param numpy.ndarray points: the coordinates of the given (training) points.
         :param numpy.ndarray values: the (training) values in the points.
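The training procedure documented in the `fit` docstring (Adam with default parameters, MSE loss by default, and an int/float/list stopping rule) can be sketched as a standalone loop. `train` is an illustrative function under those assumptions, not the actual `fit` implementation:

```python
import torch

def train(model, X, Y, stop_training, loss_fn=None):
    """Adam training loop with the flexible stopping rule described above
    (illustrative sketch, not ezyrb's exact code)."""
    if loss_fn is None:
        loss_fn = torch.nn.MSELoss()
    # Normalise the stopping criterion: ints cap the iterations,
    # floats set a tolerance on the training loss.
    criteria = stop_training if isinstance(stop_training, list) else [stop_training]
    max_iter = next((c for c in criteria if isinstance(c, int)), 10**6)
    tol = next((c for c in criteria if isinstance(c, float)), 0.0)

    optimizer = torch.optim.Adam(model.parameters())
    loss = loss_fn(model(X), Y)
    for _ in range(max_iter):
        if loss.item() < tol:  # tolerance reached within the budget
            break
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        loss = loss_fn(model(X), Y)
    return loss.item()

# Fit a tiny model to y = 3x: stop after 100 iterations or loss < 1e-6.
model = torch.nn.Linear(1, 1)
X = torch.rand(16, 1)
final_loss = train(model, X, 3 * X, stop_training=[100, 1e-6])
```

Passing only an int (or only a float) degenerates to a pure iteration cap (or a pure tolerance check), matching the docstring's description.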
