Universal Function Approximation by Neural Nets

The universal function approximation property of multilayer perceptrons was first noted by Cybenko (1989) and Hornik (1991):

- George Cybenko (1989), "Approximation by superpositions of a sigmoidal function", Mathematics of Control, Signals, and Systems.
- Kurt Hornik (1991), "Approximation capabilities of multilayer feedforward networks", Neural Networks.

"The universal approximation theorem states that a feed-forward network with a single hidden layer containing a finite number of neurons (i.e., a multilayer perceptron) can approximate continuous functions on compact subsets of ℝⁿ, under mild assumptions on the activation function."

“The theorem thus states that simple neural networks can represent a wide variety of interesting functions when given appropriate parameters; however, it does not touch upon the algorithmic learnability of those parameters.”
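Concretely, the theorem concerns functions built as a finite sum of shifted and scaled sigmoids. A minimal NumPy sketch of this one-hidden-layer form (the parameters below are random placeholders, not learned values):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def one_hidden_layer(x, W, b, v):
    """f_hat(x) = sum_i v_i * sigmoid(w_i * x + b_i) -- the form the theorem is about."""
    return sigmoid(np.outer(x, W) + b) @ v

# Placeholder parameters for a network with 8 hidden units on a scalar input.
rng = np.random.default_rng(0)
W, b, v = rng.normal(size=8), rng.normal(size=8), rng.normal(size=8)

x = np.linspace(-1.0, 1.0, 100)
y_hat = one_hidden_layer(x, W, b, v)   # shape (100,)
```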

This repository contains Jupyter notebooks that use neural networks to learn arbitrary Python lambda expressions and wavelets; a sketch of the idea follows below.
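The notebooks define their own models and training loops; as a rough illustration of the idea (not the notebooks' exact code), here is a single-hidden-layer network fitted to an arbitrary lambda with PyTorch. The target lambda, layer width, and optimizer settings are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Illustrative target: any Python lambda over a 1-D input would do.
target = lambda x: torch.sin(3 * x) + 0.5 * x

# Single hidden layer, as in the universal approximation theorem.
model = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

x = torch.linspace(-2.0, 2.0, 512).unsqueeze(1)   # training grid on a compact interval
y = target(x)

for step in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

print(f"final MSE: {loss.item():.6f}")
```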


Warren S. Sarle, SAS Institute Inc., Cary, NC, USA, ftp://ftp.sas.com/pub/neural/dojo/dojo.html:

"The benchmark data sets used by neural net and machine learning researchers tend to have many inputs, either no noise or lots of noise, and little to moderate nonlinearity. A very different set of benchmarks has become popular in the statistical literature based on several articles by Donoho and Johnstone (1994; 1995; Donoho, Johnstone, Kerkyacharian, and Picard 1995)...",

