Sourangshu Ghosh edited this page Jul 8, 2020 · 2 revisions

# Welcome to the Evolutionary-Deep-Neural-Network wiki!

Several gradient-based methods have been developed for Artificial Neural Network (ANN) training. However, such procedures can become trapped in local minima, making Evolutionary Algorithms (EAs) a promising alternative. In this work, EAs using direct representations are applied to several classification and regression ANN learning tasks. Furthermore, EAs are also combined with local optimization under the Lamarckian framework. Both strategies are compared with conventional training methods. The results reveal enhanced performance by a macro-mutation based Lamarckian approach.
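The Lamarckian idea above can be sketched in a few lines: a genome directly encodes the network's weights, macro-mutation re-randomises a subset of genes, and a local search improves each offspring and writes the improved weights back into the genome. This is a minimal illustrative sketch, not the code behind the cited work; the task (XOR), network size, mutation rate, and hill-climbing local search are all assumptions chosen for brevity.

```python
# Illustrative sketch of a Lamarckian EA with direct weight representation.
# All parameters and the toy task are assumptions, not the paper's setup.
import math, random

random.seed(0)

# Toy task: XOR with a 2-2-1 tanh network; genome = flat list of 9 weights.
DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def forward(w, x):
    # Hidden layer: 2 neurons, each with 2 input weights + bias.
    h = [math.tanh(w[i*3] * x[0] + w[i*3+1] * x[1] + w[i*3+2]) for i in range(2)]
    # Output neuron: 2 hidden weights + bias.
    return math.tanh(w[6] * h[0] + w[7] * h[1] + w[8])

def mse(w):
    return sum((forward(w, x) - y) ** 2 for x, y in DATA) / len(DATA)

def macro_mutate(w, rate=0.3):
    # Macro-mutation: re-randomise a random subset of genes entirely.
    return [random.uniform(-2, 2) if random.random() < rate else g for g in w]

def local_search(w, steps=20, eps=0.1):
    # Lamarckian step: greedy coordinate hill-climbing; the improved
    # weights replace the genome (acquired traits are inherited).
    w = list(w)
    for _ in range(steps):
        i = random.randrange(len(w))
        for delta in (eps, -eps):
            trial = list(w)
            trial[i] += delta
            if mse(trial) < mse(w):
                w = trial
                break
    return w

def evolve(pop_size=20, gens=200):
    pop = [[random.uniform(-2, 2) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=mse)                      # elitist selection
        parents = pop[:pop_size // 2]
        children = [local_search(macro_mutate(random.choice(parents)))
                    for _ in parents]
        pop = parents + children
    return min(pop, key=mse)

best = evolve()
print(round(mse(best), 4))
```

Because parents survive unchanged each generation, the best fitness is monotonically non-increasing; the macro-mutation supplies the coarse jumps that plain gradient descent lacks, while the local search refines them.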

Evolutionary artificial neural networks (EANNs) can be considered a combination of artificial neural networks (ANNs) and evolutionary search procedures such as genetic algorithms (GAs). This paper distinguishes among three levels of evolution in EANNs, i.e., the evolution of connection weights, architectures, and learning rules. It first reviews each kind of evolution in detail and then analyses major issues related to each. It is shown in the paper that although there is substantial work on the evolution of connection weights and architectures, research on the evolution of learning rules is still in its early stages, and interactions among different levels of evolution are far from being understood. It is argued in the paper that the evolution of learning rules, and its interactions with the other levels of evolution, plays a vital role in EANNs.
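The three levels described above can be made concrete with a small nested-loop sketch: an outer GA evolves an architecture parameter (number of hidden units) together with a learning-rule parameter (the weight-update step size), while an inner loop adapts the connection weights themselves. This is a hedged illustration under toy assumptions (a sine-regression task, random-perturbation hill climbing as the inner "learning rule"); it is not the survey's own algorithm.

```python
# Illustrative sketch of the three EANN evolution levels:
#   outer GA  -> architecture (hidden units) + learning-rule parameter (step)
#   inner loop-> connection-weight adaptation
# Task, operators, and parameters are assumptions chosen for brevity.
import math, random

random.seed(1)
XS = [i / 10 for i in range(-10, 11)]   # toy regression: y = sin(x)
YS = [math.sin(x) for x in XS]

def net(weights, hidden, x):
    # One-hidden-layer tanh net: per hidden unit (weight, bias), then
    # `hidden` output weights + one output bias.
    h = [math.tanh(weights[i*2] * x + weights[i*2+1]) for i in range(hidden)]
    out_w = weights[hidden*2:]
    return sum(out_w[i] * h[i] for i in range(hidden)) + out_w[hidden]

def loss(weights, hidden):
    return sum((net(weights, hidden, x) - y) ** 2
               for x, y in zip(XS, YS)) / len(XS)

def train_weights(hidden, step, iters=150):
    # Inner level: weight adaptation by random-perturbation hill climbing;
    # `step` acts as the evolved learning-rule parameter.
    n = hidden * 3 + 1
    w = [random.uniform(-1, 1) for _ in range(n)]
    best = loss(w, hidden)
    for _ in range(iters):
        trial = [g + random.gauss(0, step) for g in w]
        l = loss(trial, hidden)
        if l < best:
            w, best = trial, l
    return best

def evolve_designs(pop_size=8, gens=10):
    # Outer level: each genome is (hidden_units, step_size).
    pop = [(random.randint(1, 6), random.uniform(0.01, 0.5))
           for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=lambda g: train_weights(*g))
        elite = scored[:pop_size // 2]
        pop = elite + [(max(1, h + random.choice([-1, 0, 1])),      # mutate arch
                        max(0.005, s * random.uniform(0.5, 2.0)))   # mutate rule
                       for h, s in elite]
    return min(pop, key=lambda g: train_weights(*g))

hidden, step = evolve_designs()
print(hidden, round(step, 3))
```

Note the interaction the survey highlights: the fitness of an architecture can only be judged through the learning rule used to train it, so the two levels cannot be evolved in isolation; here a noisy inner training makes that coupling visible.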
