
Example code doesn't show actual tensor #453

Closed
faisalmemon opened this issue Jan 5, 2023 · 3 comments

faisalmemon commented Jan 5, 2023

When you run the README.md example for tinygrad:

from tinygrad.tensor import Tensor

x = Tensor.eye(3, requires_grad=True)
y = Tensor([[2.0,0,-2.0]], requires_grad=True)
z = y.matmul(x).sum()
z.backward()

print(x.grad)  # dz/dx
print(y.grad)  # dz/dy

It prints out

<Tensor <LB (3, 3) op:MovementOps.RESHAPE> with grad None>
<Tensor <LB (1, 3) op:MovementOps.RESHAPE> with grad None>

but it should print the values via numpy; e.g. use x.grad.numpy() and y.grad.numpy() to get the output:

[[ 2.  2.  2.]
 [ 0.  0.  0.]
 [-2. -2. -2.]]
[[1. 1. 1.]]
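For reference, the expected gradients above can be verified analytically with plain numpy, without tinygrad at all. This is only an illustrative sanity check (the variable names here are mine, not from the issue): z = sum(y @ x), so dz/dx[i,j] = y[0,i] and dz/dy[0,i] is the i-th row sum of x.

```python
import numpy as np

x = np.eye(3)                      # same as Tensor.eye(3)
y = np.array([[2.0, 0.0, -2.0]])   # same as the Tensor in the example

# dz/dx[i,j] = y[0,i]: every column of row i holds y[0,i]
dz_dx = np.repeat(y.T, 3, axis=1)

# dz/dy[0,i] = sum_j x[i,j]: the row sums of x (all 1 for the identity)
dz_dy = x.sum(axis=1, keepdims=True).T

print(dz_dx)
print(dz_dy)
```

These match the [[2, 2, 2], [0, 0, 0], [-2, -2, -2]] and [[1, 1, 1]] outputs shown above.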
faisalmemon added a commit to faisalmemon/tinygrad that referenced this issue Jan 5, 2023
This commit resolves issue tinygrad#453

In the example code in the README.md, when it is run, it prints for Tiny
Grad the tensors as:
<Tensor <LB (3, 3) op:MovementOps.RESHAPE> with grad None>
<Tensor <LB (1, 3) op:MovementOps.RESHAPE> with grad None>

But to be equivalent to the output of the Torch example, we need
to use numpy() to get it to show:
[[ 2.  2.  2.]
 [ 0.  0.  0.]
 [-2. -2. -2.]]
[[1. 1. 1.]]
@JJJJJJJJJJJJJJJl

That's on purpose! If you really want that behavior, simply change the repr, for example to: def __repr__(self): return str(self.numpy())
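As a generic sketch of that suggestion (using a hypothetical stand-in class, not tinygrad's actual Tensor, which keeps data in a lazy buffer):

```python
import numpy as np

class LazyTensor:
    """Hypothetical stand-in for a lazily-evaluated tensor, for illustration only."""
    def __init__(self, data):
        self._data = np.asarray(data)

    def numpy(self):
        # In tinygrad, calling .numpy() forces realisation of the lazy buffer.
        return self._data

    def __repr__(self):
        # The suggested override: show realised values instead of the lazy op.
        return str(self.numpy())

# Prints the realised values rather than something like <LazyTensor ... op:...>
print(LazyTensor([[2.0, 0.0, -2.0]]))
```

The default repr exists so that printing a tensor never silently triggers computation; overriding it trades that laziness for convenience.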

@faisalmemon
Contributor Author

Can you suggest an alternative code change to the README to make it clear that the example requires you to define how the Tensor is realised in order to get output equivalent to Torch? I think I may have totally missed the point of the example in the README.

@JJJJJJJJJJJJJJJl

Sure, they output different things, but as you said, that's not really the point. The point is to show how similar tinygrad code is to PyTorch's, so there's no need to learn a whole different syntax to write some ML stuff ^^

geohot pushed a commit that referenced this issue Jan 9, 2023
This commit resolves issue #453

geohot closed this as completed Jan 9, 2023
3 participants