Neural Networks

The EBP Training Algorithm for an MLP Encoder

The project is to implement the Error Back-Propagation (EBP) training algorithm for a multi-layer perceptron (MLP) 4-2-4 encoder using MATLAB. The structure of the encoder is as follows:

  • An input layer with 4 units.
  • A single hidden layer with 2 units.
  • An output layer with 4 units.

Each unit has a sigmoid activation function. The task of the encoder is to map the following inputs onto outputs:

Input Pattern    Output Pattern
1, 0, 0, 0       1, 0, 0, 0
0, 1, 0, 0       0, 1, 0, 0
0, 0, 1, 0       0, 0, 1, 0
0, 0, 0, 1       0, 0, 0, 1

Activation Functions

Activation functions allow a neural network to learn complicated, non-linear functional mappings between the inputs and the response variables. Several activation functions are commonly used to suit different kinds of data, such as sigmoid, tanh, and ReLU. In this project the sigmoid function is applied.
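The project itself is written in MATLAB, but as an illustration the sigmoid and its derivative (which is needed later for the weight updates) can be sketched in Python/NumPy:

```python
import numpy as np

def sigmoid(x):
    """Logistic sigmoid activation: squashes any real input into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_deriv(y):
    """Derivative of the sigmoid, written in terms of its output y = sigmoid(x):
    d/dx sigmoid(x) = y * (1 - y)."""
    return y * (1.0 - y)
```

Expressing the derivative in terms of the unit's output is convenient during back-propagation, since the output is already available from the forward pass.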

Total Error Calculation

A training set consists of

  • A set of input vectors 𝑖1, ..., 𝑖N, where the dimension of 𝑖n is equal to the number of MLP input units.
  • For each 𝑛, a target vector 𝑡n, where the dimension of 𝑡n is equal to the number of output units.

The error 𝐸 is defined as the sum-of-squares error over the whole training set:

E = 1/2 Σₙ Σₖ ( yₖ(𝑖ₙ) − 𝑡ₙ,ₖ )²

where yₖ(𝑖ₙ) is the 𝑘-th output of the MLP for input vector 𝑖ₙ, and 𝑡ₙ,ₖ is the 𝑘-th component of the target vector 𝑡ₙ.
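A sketch of this total-error calculation in Python/NumPy (assuming one pattern per row, as a stand-in for the MATLAB implementation):

```python
import numpy as np

def total_error(outputs, targets):
    """Sum-of-squares error over the training set:
    E = 1/2 * sum over patterns n and outputs k of (y_nk - t_nk)^2."""
    return 0.5 * np.sum((outputs - targets) ** 2)
```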

Weights Modification

Let the weights between the input and hidden layer, and between the hidden and output layer, be two matrices 𝑊1 and 𝑊2, of sizes 4 × 2 and 2 × 4 respectively. The values in these two matrices are initialised randomly. Every value in 𝑊1 and 𝑊2 is updated after each iteration of forward and backward propagation.
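A minimal sketch of this random initialisation in Python/NumPy (the uniform range and seed here are illustrative assumptions, not the project's exact choice):

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed so both systems can share initial weights
# 4 input units -> 2 hidden units -> 4 output units.
W1 = rng.uniform(-1.0, 1.0, size=(4, 2))  # input -> hidden weights
W2 = rng.uniform(-1.0, 1.0, size=(2, 4))  # hidden -> output weights
```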

Update W2 (the weights between hidden and output layer)

The new weights between the hidden and output layer are calculated by gradient descent:

W₂ ← W₂ − η ∂E/∂W₂,  where ∂E/∂W₂ = hᵀ δ_out and δ_out = (y − t) ⊙ y ⊙ (1 − y)

Here η is the learning rate, h is the vector of hidden-layer activations, y is the output vector, t is the target vector, and ⊙ denotes element-wise multiplication.

Update W1 (the weights between input and hidden layer)

The new weights between the input and hidden layer are calculated by back-propagating the output deltas through 𝑊2:

W₁ ← W₁ − η ∂E/∂W₁,  where ∂E/∂W₁ = 𝑖ᵀ δ_hid and δ_hid = (δ_out W₂ᵀ) ⊙ h ⊙ (1 − h)

with 𝑖 the input vector and δ_out as defined for the 𝑊2 update.
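Both weight updates can be sketched together as one forward/backward step in Python/NumPy. This assumes sigmoid units, the sum-of-squares error, and row vectors for patterns; it is an illustration, not the project's MATLAB code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ebp_step(W1, W2, x, t, eta):
    """One EBP step for a single pattern.
    x: input, shape (1, 4); t: target, shape (1, 4); eta: learning rate.
    Returns the updated (W1, W2)."""
    # Forward pass.
    h = sigmoid(x @ W1)                      # hidden activations, shape (1, 2)
    y = sigmoid(h @ W2)                      # output activations, shape (1, 4)
    # Output-layer delta: (y - t) * y * (1 - y).
    delta_out = (y - t) * y * (1.0 - y)
    # Hidden-layer delta: back-propagate through W2.
    delta_hid = (delta_out @ W2.T) * h * (1.0 - h)
    # Gradient-descent weight updates.
    W2 = W2 - eta * (h.T @ delta_out)
    W1 = W1 - eta * (x.T @ delta_hid)
    return W1, W2
```

A single step with a modest learning rate should reduce the error on the pattern it was computed from.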

An Improved EBP Training Algorithm

A bias is a constant term that helps the model fit the given data better. A bias unit is an 'extra' neuron, with no incoming connections and an output fixed at 1, added to every layer before the output layer.
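One common way to implement such bias units is to append a constant 1 to each layer's activations and give the following weight matrix an extra row; the sketch below shows this augmentation in Python/NumPy (an assumption about the implementation, not the project's exact MATLAB code):

```python
import numpy as np

def with_bias(a):
    """Append a constant-1 bias unit to each row of activations a."""
    return np.hstack([a, np.ones((a.shape[0], 1))])

# With bias units, the weight matrices gain one extra row:
# W1 becomes (4 + 1) x 2 and W2 becomes (2 + 1) x 4, and the forward
# pass uses with_bias(x) @ W1 and with_bias(h) @ W2.
```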

Evaluation: Bias vs Non-bias

The MLP parameters are as follows:

  • Learning rate: 6.0
  • Number of iterations: 1000
  • The initial weights of the two systems are equal
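A minimal batch training loop under these parameters might look like the Python/NumPy sketch below (the non-bias system; the bias variant would additionally augment each layer's activations with a constant unit). The seed and initialisation range are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(eta=6.0, n_iters=1000, seed=0):
    """Batch EBP training of the 4-2-4 encoder; returns the final total error."""
    rng = np.random.default_rng(seed)
    X = np.eye(4)          # the four one-hot input patterns
    T = np.eye(4)          # targets equal the inputs (encoder task)
    W1 = rng.uniform(-1.0, 1.0, (4, 2))
    W2 = rng.uniform(-1.0, 1.0, (2, 4))
    for _ in range(n_iters):
        H = sigmoid(X @ W1)                    # hidden activations for all patterns
        Y = sigmoid(H @ W2)                    # outputs for all patterns
        d_out = (Y - T) * Y * (1.0 - Y)        # output-layer deltas
        d_hid = (d_out @ W2.T) * H * (1.0 - H) # hidden-layer deltas
        W2 -= eta * (H.T @ d_out)
        W1 -= eta * (X.T @ d_hid)
    Y = sigmoid(sigmoid(X @ W1) @ W2)
    return 0.5 * np.sum((Y - T) ** 2)
```

Running the same loop twice from the same seed, once with and once without bias units, gives the comparison described in this section.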
