luo3300612/MyAutoGrad

This is an autograd engine implemented by myself, just for fun!

To ask for a gradient, just use A.grad(B), which gives you ∂A/∂B (the partial derivative of A with respect to B).

Philosophy: efficiency is the last thing I will consider.
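
A minimal sketch of the A.grad(B) call, using only the pieces shown in this README (Mat, op.log); the import line is a guess at the module layout, so adapt it to the actual package structure.

# Hedged sketch: import path is an assumption, not the repo's documented API
import math
from core import Mat, op   # hypothetical import; the real layout may differ

B = Mat([[1, 2], [2, 3], [2, math.e]])
A = op.log(B)               # elementwise log, as in the example under "Other usages"
dA_dB = A.grad(B)           # partial A / partial B
print(dA_dB)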

core

example

Other usages

>>> mat1 = Mat([[1, 2], [2, 3], [2, math.e]])
>>> format(op.log(mat1), '.3f')
['0.000', '0.693']
['0.693', '1.099']
['0.693', '1.000']

Log

Nov 5

  • refactor project structure
  • add unittest

Nov 6

  • .T() -> .T
  • add MatMulInternalError
  • add NN for exclusive-or problem

Nov 7

  • remove children of Mat and Node
  • optimize zero_grad to simplify computation graph
  • add node scalar operation, make Mat scalar operation depend on it
  • overload > < of Node
  • add softmax
  • fix zero_grad error

ISSUE

Since Node and Mat both implement the reflected operator methods (__radd__, __rmul__, and so on), we run into an error when we do Mat + Node or Node + Mat
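
A small sketch of why mixed operands break; these classes are simplified stand-ins, not the repo's actual Node/Mat code.

# Stand-in classes illustrating the mixed-operand problem (not the real code)
class Node:
    def __init__(self, value):
        self.value = value

    def __add__(self, other):            # assumes `other` is another Node
        return Node(self.value + other.value)

    __radd__ = __add__


class Mat:
    def __init__(self, rows):
        self.rows = rows                 # plain lists of numbers, for brevity

    def __add__(self, other):            # assumes `other` is another Mat
        return Mat([[a + b for a, b in zip(r1, r2)]
                    for r1, r2 in zip(self.rows, other.rows)])

    __radd__ = __add__


# Mat.__add__ runs first, hits the missing .rows attribute on the Node and
# raises, instead of returning NotImplemented so Node.__radd__ could take over.
try:
    Mat([[1, 2]]) + Node(3)
except AttributeError as e:
    print("mixed operands break:", e)

Returning NotImplemented for unexpected operand types (or dispatching on the operand type, e.g. with functools.singledispatch as listed in the TODO) would let Python fall back to the other class's reflected method.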

TODO

  • singledispatch
  • combine Node.zero_grad and Mat.zero_grad
  • fix some awful features
  • add flag require_grad to simplify computation graph
  • fix format
  • random

Thought

  • Node-based gradient -> Mat-based gradient

Mat-based implementation

  • use numpy to avoid overloading
  • use require_grad
  • calculate gradients in the forward step when require_grad = 1 (see the sketch below)
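
A hedged sketch of this Mat-based idea: wrap a numpy array, track require_grad, and accumulate per-element gradients during the forward pass. The names (MatTensor, grad_wrt) and the elementwise-only gradients are illustrative assumptions, not the repo's design.

# Sketch only: names and semantics are assumptions, not the repo's actual API
import numpy as np

class MatTensor:
    def __init__(self, data, require_grad=False):
        self.data = np.asarray(data, dtype=float)
        self.require_grad = require_grad
        # per-element gradients of this tensor w.r.t. each leaf it depends on
        self.grad_wrt = {self: np.ones_like(self.data)} if require_grad else {}

    def __mul__(self, scalar):                      # scalar multiply only
        out = MatTensor(self.data * scalar, require_grad=self.require_grad)
        # forward-mode style: propagate gradients immediately
        out.grad_wrt = {leaf: g * scalar for leaf, g in self.grad_wrt.items()}
        return out

    def __add__(self, other):                       # elementwise Mat + Mat
        out = MatTensor(self.data + other.data,
                        require_grad=self.require_grad or other.require_grad)
        grads = dict(self.grad_wrt)
        for leaf, g in other.grad_wrt.items():
            grads[leaf] = grads.get(leaf, 0) + g
        out.grad_wrt = grads
        return out

    def grad(self, leaf):
        return self.grad_wrt.get(leaf, np.zeros_like(leaf.data))

x = MatTensor([[1.0, 2.0]], require_grad=True)
y = x * 3 + x        # elementwise, so dy/dx = 3 + 1 = 4 for every entry
print(y.grad(x))     # [[4. 4.]]

Because the gradients ride along with the forward computation, tensors with require_grad = 0 skip the bookkeeping entirely, which is the point of the require_grad flag above.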
