
multilayer_perceptron 🧮

📝 Description

Welcome to my MLP (Multilayer Perceptron) library, developed from scratch and modeled after Keras. It lets you create, train, and evaluate multilayer perceptron models.

For this project, the library is trained on cell-measurement data to predict a cancer diagnosis, where M denotes a malignant sample and B a benign one.

📦 Features

Sequential model

The Sequential model is a linear stack of layers. A usage sketch combining its methods follows the list below.

    Sequential()

Methods:

  • add

Adds a layer to the model.

    def add(self, layer):
  • compile

    Configures the model for training.

    def compile(self, optimizer="rmsprop", loss=None, metrics=None):
  • fit

    Trains the model for a fixed number of epochs (iterations on a dataset).

    def fit(
        self,
        x=None,
        y=None,
        batch_size=None,
        epochs=1,
        callbacks=None,
        validation_split=0.0,
        validation_data=None,
        shuffle=True,
    ):
  • evaluate

    Returns the loss value & metrics values for the model.

    def evaluate(self, x=None, y=None, batch_size=None):
  • predict

    Generates output predictions for the input samples.

    def predict(self, x=None, batch_size=None):
  • save

Saves the model to a JSON file.

    def save(self, path):
  • load

Loads the model from a JSON file.

    def load(self, path):
  • summary

    Prints a summary of the model.

    def summary(self):
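
Putting these methods together, a minimal end-to-end sketch might look like the following. The import path, the string names for the optimizer and loss, and the x_train/y_train arrays are assumptions for illustration, not the package's documented layout:

    # Hypothetical usage sketch: import path and data variables are assumed.
    from multilayer_perceptron import Sequential, Dense

    model = Sequential()
    model.add(Dense(32, activation="relu", input_dim=30))
    model.add(Dense(2, activation="softmax"))
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    # x_train: (n_samples, 30) features; y_train: matching labels.
    model.fit(x_train, y_train, batch_size=32, epochs=50, validation_split=0.2)

    results = model.evaluate(x_test, y_test)  # loss value & metric values
    model.summary()
    model.save("model.json")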

Layers

Layers are the basic building blocks of neural networks.

  • Dense
    Dense(
        n_neurons,
        activation="linear",
        input_dim=None,
        kernel_initializer="glorot_uniform",
        bias_initializer="zeros",
        kernel_regularizer=None,
        bias_regularizer=None,
    )
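
Under the hood, a Dense layer computes activation(x · W + b). A minimal NumPy sketch of that forward pass (illustrative names, not the library's internals):

    import numpy as np

    def dense_forward(x, W, b, activation=lambda z: z):
        # x: (batch, n_in), W: (n_in, n_neurons), b: (n_neurons,)
        return activation(x @ W + b)

    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 30))   # batch of 4 samples, 30 features
    W = rng.normal(size=(30, 16))  # weights for a 16-neuron layer
    b = np.zeros(16)               # "zeros" bias initializer
    out = dense_forward(x, W, b, activation=lambda z: np.maximum(z, 0.0))  # ReLU
    print(out.shape)               # (4, 16)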

Activation Functions

Activation functions are mathematical functions that determine the output of a neuron. A function is attached to each neuron in the network and decides whether it should be activated, based on whether the neuron's input is relevant to the model's prediction. Sketches of a few of them follow the list.

  • Linear
  • ReLU
  • Leaky ReLU
  • Sigmoid
  • Softmax
  • Softplus
  • Softsign
  • Tanh
  • Exponential
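
For reference, NumPy sketches of a few of the listed activations (standard definitions; the library's own versions may differ in detail):

    import numpy as np

    def relu(z):
        return np.maximum(z, 0.0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def softmax(z):
        # Subtract the row-wise max for numerical stability.
        e = np.exp(z - z.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)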

Initializers

Initializers define how the initial random weights of a layer are set; the Glorot and He schemes are sketched after the list.

  • Random Normal
  • Random Uniform
  • Truncated Normal
  • Zeros
  • Ones
  • Glorot Normal
  • Glorot Uniform
  • He Normal
  • He Uniform
  • Identity
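
The Glorot and He schemes scale the random draw by the layer's fan-in/fan-out. A NumPy sketch of the standard formulas (not necessarily the library's exact code):

    import numpy as np

    def glorot_uniform(fan_in, fan_out, rng=None):
        # Uniform in [-limit, limit] with limit = sqrt(6 / (fan_in + fan_out)).
        rng = rng if rng is not None else np.random.default_rng()
        limit = np.sqrt(6.0 / (fan_in + fan_out))
        return rng.uniform(-limit, limit, size=(fan_in, fan_out))

    def he_normal(fan_in, fan_out, rng=None):
        # Normal with std = sqrt(2 / fan_in), well suited to ReLU layers.
        rng = rng if rng is not None else np.random.default_rng()
        std = np.sqrt(2.0 / fan_in)
        return rng.normal(0.0, std, size=(fan_in, fan_out))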

Optimizers

Optimizers minimize the loss function by iteratively adjusting the model's parameters based on their gradients, which improves accuracy and speeds up convergence during training. The update rules for two of them are sketched after the list below.

To compare all the optimizers on a randomly generated network structure, run:

python train_model.py -o data.csv

[Plot: training curves from the optimizer comparison above]

  • Adam
  • Stochastic Gradient Descent
  • Stochastic Gradient Descent with Nesterov Momentum
  • RMSprop
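
The per-step parameter update these optimizers apply, sketched for plain SGD and SGD with Nesterov momentum (standard formulations, not the library's exact code):

    def sgd_step(w, grad, lr=0.01):
        # Vanilla gradient descent: move against the gradient.
        return w - lr * grad

    def nesterov_step(w, v, grad, lr=0.01, momentum=0.9):
        # Nesterov momentum: update the velocity, then look ahead along it.
        v = momentum * v - lr * grad
        w = w + momentum * v - lr * grad
        return w, v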

Loss Functions

A loss function measures the discrepancy between the model's predicted output and the true output; binary cross entropy is sketched below.

  • Binary Cross Entropy
  • Mean Squared Error
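
Binary cross entropy as conventionally defined (NumPy sketch; the eps clip guards against log(0)):

    import numpy as np

    def binary_cross_entropy(y_true, y_pred, eps=1e-12):
        y_pred = np.clip(y_pred, eps, 1.0 - eps)
        return -np.mean(y_true * np.log(y_pred)
                        + (1.0 - y_true) * np.log(1.0 - y_pred))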

Callbacks

Callbacks are functions applied at certain stages of the training process, such as at the end of each epoch. A usage example follows the list.

  • Early Stopping

The early stopping callback stops training once the monitored metric stops improving (for example, the loss on validation data).

    EarlyStopping(monitor="loss", min_delta=0, patience=0, mode="min", start_from_epoch=0)
  • Model Checkpoint

    The model checkpoint callback is used to save the model after every epoch.

ModelCheckpoint(filepath, monitor="val_loss", mode="min")
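
Both callbacks plug into fit() via its callbacks argument, using the signatures above (the filename and the model variable are illustrative):

    early_stop = EarlyStopping(monitor="val_loss", patience=5, mode="min")
    checkpoint = ModelCheckpoint("best_model.json", monitor="val_loss", mode="min")

    model.fit(x_train, y_train, epochs=100, validation_split=0.2,
              callbacks=[early_stop, checkpoint])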

Metrics

Metrics are used to evaluate the performance of your model; precision and recall are sketched at the end of this section.

Here is a plot of some metrics for a trained model:

[Plot: metric curves recorded during training]

  • Accuracy
  • Binary Accuracy
  • Precision
  • Recall
  • Mean Squared Error
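
Precision and recall on binary labels, as the listed metrics conventionally define them (NumPy sketch):

    import numpy as np

    def precision(y_true, y_pred):
        # Of everything predicted positive, how much really is positive?
        tp = np.sum((y_pred == 1) & (y_true == 1))
        fp = np.sum((y_pred == 1) & (y_true == 0))
        return tp / (tp + fp) if (tp + fp) > 0 else 0.0

    def recall(y_true, y_pred):
        # Of everything actually positive, how much did we catch?
        tp = np.sum((y_pred == 1) & (y_true == 1))
        fn = np.sum((y_pred == 0) & (y_true == 1))
        return tp / (tp + fn) if (tp + fn) > 0 else 0.0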

📌 TODO

  • Finish the regularization implementation

👨‍💻 Author

tsannie