Example code doesn't show actual tensor #453
Comments
This commit resolves issue tinygrad#453. When the example code in the README.md is run, tinygrad prints the tensors as:

<Tensor <LB (3, 3) op:MovementOps.RESHAPE> with grad None>
<Tensor <LB (1, 3) op:MovementOps.RESHAPE> with grad None>

But to be equivalent to the output of the Torch example, we need to use numpy() to get it to show:

[[ 2.  2.  2.]
 [ 0.  0.  0.]
 [-2. -2. -2.]]
[[1. 1. 1.]]
That's on purpose! If you really want that behavior, simply change the repr:
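The actual repr change is not quoted in this thread. As a rough illustration only (not tinygrad's real code — `LazyTensor` and its fields here are hypothetical), the idea is to make `__repr__` realise the lazy tensor and print its data instead of the op graph:

```python
import numpy as np

class LazyTensor:
    """Hypothetical stand-in for a lazy tensor: holds a deferred computation."""
    def __init__(self, thunk, shape):
        self.thunk = thunk    # callable producing a numpy array on demand
        self.shape = shape

    def numpy(self):
        # Force evaluation of the deferred computation
        return self.thunk()

    def __repr__(self):
        # Eager-looking repr: realise the data rather than
        # printing something like "<Tensor <LB (3, 3) ...> with grad None>"
        return repr(self.numpy())

t = LazyTensor(lambda: np.eye(3), (3, 3))
print(t)  # prints the realised 3x3 identity array
```

The trade-off, which the maintainer's reply alludes to, is that printing would then silently trigger computation, which defeats the point of a lazy tensor.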
Can you suggest an alternative code change to the README to make it clear that the example requires you to define how the Tensor is realised in order to get output equivalent to Torch? I think I may have totally missed the point of the example in the README.
Sure, they output different things, but as you said, that's not really the point. The point is to show how similar tinygrad code is to PyTorch code, so there's no need to learn a whole different syntax to write some ML stuff. ^^
When you run the README.md example for tinygrad, it prints the lazy Tensor reprs (shown in the commit message above) rather than the values. It should instead be printed via numpy, e.g. use

x.grad.numpy()

and

y.grad.numpy()

to get the output.
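For reference, the gradient values quoted in this thread can be checked with plain numpy, independent of tinygrad or torch. Assuming the README example computes z = (y @ x).sum() with x = eye(3) and y = [[2, 0, -2]] (inferred from the shapes and values above, not quoted verbatim here), the expected grads are:

```python
import numpy as np

x = np.eye(3)
y = np.array([[2.0, 0.0, -2.0]])

# z = sum of all entries of y @ x, so:
#   dz/dx[i, k] = y[0, i]        -> each row of x's grad is constant
#   dz/dy[0, i] = sum_k x[i, k]  -> row sums of x
x_grad = np.repeat(y.T, x.shape[1], axis=1)
y_grad = x.sum(axis=1, keepdims=True).T

print(x_grad)  # [[ 2.  2.  2.] [ 0.  0.  0.] [-2. -2. -2.]]
print(y_grad)  # [[1. 1. 1.]]
```

These match the arrays in the issue, which is what `x.grad.numpy()` and `y.grad.numpy()` would display.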