omaraflak/neglnn

# NEGLNN

Not Efficient but Great to Learn Neural Network

## Example

```python
import numpy as np
from neglnn.layers.dense import Dense
from neglnn.activations.tanh import Tanh
from neglnn.losses.mse import MSE
from neglnn.initializers.normal import Normal
from neglnn.optimizers.momentum import Momentum
from neglnn.network.network import Network, BlockBuilder

# XOR dataset: each sample is a column vector
X = np.reshape([[0, 0], [0, 1], [1, 0], [1, 1]], (4, 2, 1))
Y = np.reshape([[0], [1], [1], [0]], (4, 1, 1))

# 2 -> 3 -> 1 network; each trainable block takes an initializer and an optimizer
network = Network.create([
    BlockBuilder(Dense(2, 3), Normal(), lambda: Momentum()),
    BlockBuilder(Tanh()),
    BlockBuilder(Dense(3, 1), Normal(), lambda: Momentum()),
    BlockBuilder(Tanh())
])

# train for 1000 epochs against the MSE loss
network.fit(X, Y, MSE(), 1000)

print(network.predict_all(X))
```
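For intuition about what the library is doing, here is a minimal plain-NumPy sketch of the same 2 → 3 → 1 tanh network trained on XOR with MSE and per-sample gradient descent. This is not neglnn's internal code: the names below are illustrative, and plain SGD stands in for the `Momentum` optimizer.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR inputs/targets as column vectors, same shapes as the example above
X = np.reshape([[0, 0], [0, 1], [1, 0], [1, 1]], (4, 2, 1))
Y = np.reshape([[0], [1], [1], [0]], (4, 1, 1))

# 2 -> 3 -> 1 network, weights drawn from a normal distribution
W1, b1 = rng.normal(size=(3, 2)), rng.normal(size=(3, 1))
W2, b2 = rng.normal(size=(1, 3)), rng.normal(size=(1, 1))

def predict(x):
    a1 = np.tanh(W1 @ x + b1)   # Dense(2, 3) followed by Tanh
    a2 = np.tanh(W2 @ a1 + b2)  # Dense(3, 1) followed by Tanh
    return a1, a2

def mse():
    return float(np.mean([(predict(x)[1] - y) ** 2 for x, y in zip(X, Y)]))

initial_loss = mse()
lr = 0.1
for _ in range(5000):                # epochs
    for x, y in zip(X, Y):           # per-sample (stochastic) updates
        a1, a2 = predict(x)
        # backpropagation: dMSE/da2 = 2*(a2 - y), tanh'(z) = 1 - tanh(z)^2
        d2 = 2 * (a2 - y) * (1 - a2 ** 2)
        d1 = (W2.T @ d2) * (1 - a1 ** 2)
        # plain gradient descent step (Momentum would add a velocity term)
        W2 -= lr * d2 @ a1.T; b2 -= lr * d2
        W1 -= lr * d1 @ x.T;  b1 -= lr * d1
final_loss = mse()

predictions = [predict(x)[1].item() for x in X]
```

A momentum optimizer would keep a running velocity per parameter, e.g. `v = beta * v - lr * grad; W += v`, instead of the raw `-lr * grad` step shown here.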
