mathemaphysics edited this page Jul 21, 2018 · 2 revisions

Welcome to the neural wiki!

Introduction

This is experimental code. Eventually it should implement a decent incarnation of both supervised and unsupervised learning in deep neural networks. The primary language used here is C, but I'm hoping to see it branch out into everything else: from C++14 and Objective-C to D (seriously) and Go. I'm particularly fond of Go lately, so hopefully we can expand in that direction.

Algorithms and methodology

My fondest hope is to develop a much more useful version of Geoffrey Hinton's contrastive divergence method, used for unsupervised learning in generative neural networks. In this case I intend to work with restricted Boltzmann machines (RBMs), which are structurally similar to standard deep learning networks but with restricted layer-to-layer connectivity. The goal is to allow a stochastic generative RBM to do object classification without supervision. The stochastic nature of the network comes from the way the system is sampled: Gibbs sampling is the main component, and it also lies at the heart of why RBMs are generative networks. This is why contrastive divergence is used: it lends itself naturally to stochastic sampling, producing an approximation of the gradient of a representative energy of the network.

Generative neural networks

Backpropagation and other supervised learning methods