The newest ML framework that you probably don't need.
It's really just an autograd engine backed by NumPy.
```python
import tinytorch as tt  # 👀

def f(x):
    return x**3 + x

# 700 points from -4.00 to 2.99
x = tt.tensor((tt.arange(700) - 400) / 100, requires_grad=True)
z = f(x)
z.sum().backward()
print(x.grad)  # should match 3*x**2 + 1
```
```python
import tinytorch as tt

def f(x, y):
    return x**2 + x*y + (y**3 + y) ** 0.5

x = tt.rand((5, 5), requires_grad=True)
y = tt.rand((5, 5), requires_grad=True)
z = f(x, y)
z.sum().backward()
print(x.grad)  # dz/dx = 2x + y
print(y.grad)  # dz/dy = x + (3y^2 + 1) / (2 * sqrt(y^3 + y))
```
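For the curious, here is a minimal sketch of how an engine like this can work internally (illustrative only, not tinytorch's actual code): every tensor wraps a NumPy array, remembers its parents and a small backward closure, and `backward()` walks the graph in reverse topological order, accumulating gradients.

```python
# Minimal NumPy-backed autograd sketch (illustrative; names are made up, not tinytorch internals)
import numpy as np

class Tensor:
    def __init__(self, data, requires_grad=False, parents=(), backward_fn=None):
        self.data = np.asarray(data, dtype=float)
        self.grad = np.zeros_like(self.data) if requires_grad else None
        self.requires_grad = requires_grad
        self.parents = parents          # tensors this one was computed from
        self.backward_fn = backward_fn  # closure that pushes grads to parents

    def __mul__(self, other):
        out = Tensor(self.data * other.data, requires_grad=True, parents=(self, other))
        def backward_fn(grad):
            if self.grad is not None:  self.grad  += grad * other.data
            if other.grad is not None: other.grad += grad * self.data
        out.backward_fn = backward_fn
        return out

    def sum(self):
        out = Tensor(self.data.sum(), requires_grad=True, parents=(self,))
        def backward_fn(grad):
            if self.grad is not None:
                self.grad += grad * np.ones_like(self.data)
        out.backward_fn = backward_fn
        return out

    def backward(self):
        # build reverse topological order, then pull gradients back through each closure
        topo, seen = [], set()
        def visit(t):
            if id(t) not in seen:
                seen.add(id(t))
                for p in t.parents:
                    visit(p)
                topo.append(t)
        visit(self)
        self.grad = np.ones_like(self.data)
        for t in reversed(topo):
            if t.backward_fn is not None:
                t.backward_fn(t.grad)

x = Tensor([1.0, 2.0, 3.0], requires_grad=True)
(x * x).sum().backward()
print(x.grad)  # d/dx sum(x*x) = 2x -> [2. 4. 6.]
```

A real engine adds broadcasting rules and many more ops, but the control flow is the same idea.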
```bash
python mnist.py          # CPU (NumPy backend)
GPU=1 python mnist.py    # GPU
```
Note: NumPy is too slow to train an LLM; you'll need to install JAX (it's only used as a faster NumPy).
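How can a flag like `GPU=1` switch backends? Here is a rough sketch of one way to do it (the module name and env-var handling below are assumptions, not necessarily what tinytorch does): pick `jax.numpy` over `numpy` once at import time and keep the rest of the code backend-agnostic.

```python
# backend.py -- illustrative sketch of env-var backend selection (names are assumptions)
import os

if os.environ.get("GPU", "0") == "1":
    import jax.numpy as np   # jax.numpy mirrors most of the NumPy API and can run on GPU
else:
    import numpy as np       # default: plain NumPy on CPU

# everything else imports `np` from this module and doesn't care which backend it got
def relu(x):
    return np.maximum(x, 0)
```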
If you want to see your computation graph, run visulize.py.
Requirements:

```bash
pip install graphviz
sudo apt-get install -y graphviz  # no idea for Windows, I use WSL
```
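If you want the gist of how such a visualizer can work, here is a rough sketch using the graphviz Python package (the `parents` attribute and function name are made up for illustration, not the script's actual API): walk the parent pointers from the output tensor and emit one node per tensor.

```python
# Sketch of dumping an autograd graph with the graphviz package (illustrative only)
from graphviz import Digraph

def draw_graph(root, filename="graph"):
    dot = Digraph(graph_attr={"rankdir": "LR"})  # left-to-right layout
    seen = set()

    def visit(t):
        if id(t) in seen:
            return
        seen.add(id(t))
        dot.node(str(id(t)), label=f"shape={getattr(t.data, 'shape', ())}")
        for p in getattr(t, "parents", ()):   # assumes tensors expose their parents
            dot.edge(str(id(p)), str(id(t)))
            visit(p)

    visit(root)
    dot.render(filename, format="png", cleanup=True)  # writes graph.png
```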
Because I was bored.
- Part 1: pythonstuff/build-tensors
- Part 2: pythonstuff/backward-pass
- Part 3: pythonstuff/refactor-&-cleanup
1.0 - karpathy micrograd (really simple, not much you can do with it)
3.14 - tinytorch (simple, and you can do a lot of things with it) <= ❤️
69 - tinygrad (no longer simple, but you can do a lot more)
∞ - pytorch (the GOAT library that makes GPUs go brrr)
- be nice
- performance optimizations / more examples are welcome
- document your sources, if any
- keep tinytorch.py under 1000 lines