Some tests on toy examples #117
Conversation
This is neat! The only comment I have (not for this PR) is that perhaps we can wire `np.random.seed` to `torch.set_seed`, together with a `torch.warn_once` saying "careful, the random numbers are not the same!". This would allow us to not use `_np` at all and still run these models. The point is that these examples just set a seed for reproducibility, so any seed (and hopefully any generator) will work just fine.
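For concreteness, a minimal sketch of what that wiring could look like. This is hypothetical, not torch_np's actual API: `torch.manual_seed` stands in for the `torch.set_seed` mentioned above, and a plain `warnings.warn` approximates `torch.warn_once`.

```python
import warnings

import torch

def seed(s=None):
    """An np.random.seed lookalike that forwards to the global torch RNG.

    Hypothetical sketch of the proposal above, not the torch_np API.
    """
    # Approximates the proposed torch.warn_once.
    warnings.warn("careful, the random numbers are not the same!", stacklevel=2)
    if s is not None:
        torch.manual_seed(int(s))
```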
@honno could you give me a hand at excluding everything under …? EDIT: figured it out.
OK, a first fix for the binary subtract operator on boolean arrays is in. Here's a minimal reproducer:
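A hedged stand-in for that reproducer (the exact snippet is an assumption): both libraries reject `-` on boolean arrays with a clean error, though the advanced-indexing variant mentioned next is not shown here.

```python
import numpy as np
import torch

a = np.array([True, False, True])
try:
    a - a  # NumPy: TypeError, boolean subtract is not supported
except TypeError as e:
    print("numpy:", e)

t = torch.tensor([True, False, True])
try:
    t - t  # PyTorch: RuntimeError, suggests `~` / logical_not() instead
except RuntimeError as e:
    print("torch:", e)
```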
Note that in PyTorch this is specific to advanced indexing. So maybe this can be considered a bug in PyTorch, @lezcano?
Not a bug, as it throws a clean error, so this was accounted for. The fix LGTM.
@lezcano I wonder if you could clarify something for me? Calling `abs()` on large values seems to give different results depending on how many elements the array has. Is that expected?
Ugh, yeah, that's possible. On CPU, when you have 8 or more elements, vectorisation kicks in. It looks like the vectorised implementation of `abs()` loses precision for large values.
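For intuition (this is not the actual PyTorch kernel): with the textbook formulation of complex `abs`, the intermediate squares overflow long before the result does, which is why a scaled formulation is needed for large inputs.

```python
import math

x, y = 3e200, 4e200  # real and imaginary parts of z

# Naive |z| = sqrt(x**2 + y**2): the squares overflow to inf,
# even though the true result, 5e200, is perfectly representable.
naive = math.sqrt(x * x + y * y)

# A scaled formulation (what math.hypot does internally) stays finite.
stable = math.hypot(x, y)

print(naive)   # inf
print(stable)  # 5e+200
```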
@ev-br found in Quansight-Labs/numpy_pytorch_interop#117 (comment) that the precision of `abs()` for large values in the vectorised case is less-than-good. This PR fixes this issue. While doing that, we are able to comment out a few tests on extremal values.
OK, we have four examples, which all basically work (in eager mode). Let's merge it, to have a link to add to the RFC.
This is great! Could you add a short readme, similar to the OP, in the folder?
Is this what you meant: #120
I noticed your Mandelbrot example has some timeit benchmarking; was there anything interesting to report on performance differences?
Nothing really interesting beyond "sloth-slow in eager mode". Exactly how slow I did not time; that would be more interesting when integrated with torch.dynamo, I guess. And it's not really mine, all credit goes to Nicholas Rougier :-).
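For anyone curious, a hypothetical micro-benchmark in this spirit (the workload below is made up, and it assumes torch_np mirrors the NumPy calls used; it is not the script's actual harness):

```python
import timeit

for mod in ("numpy", "torch_np"):
    setup = f"import {mod} as np; a = np.ones(100_000)"
    t = timeit.timeit("(np.abs(a) + a).sum()", setup=setup, number=100)
    print(mod, t)
```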
Fixes #53958 #48486. Pull Request resolved: #99550. Approved by: https://github.com/ngimel, https://github.com/peterbell10
Try testing our torch_np wrapper on real-world examples. Several toy problems first, all from N. Rougier, *From Python to NumPy*, https://www.labri.fr/perso/nrougier/from-python-to-numpy/:

- Build a maze and find a path through it: https://github.com/rougier/from-python-to-numpy/blob/master/code/maze_numpy.py
- Simulate the diffusion/advection spreading of a smoke ring: https://github.com/rougier/from-python-to-numpy/blob/master/code/smoke_1.py
- Construct and draw the Mandelbrot fractal: https://github.com/rougier/from-python-to-numpy/blob/master/code/mandelbrot_numpy_1.py

The strategy is to replace `import numpy as np` with `import torch_np as np` and run the scripts otherwise unchanged (i.e., in eager mode), as sketched below. See e2e/tests.md for some notes.
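For illustration, the swap amounts to something like this (a hypothetical snippet, not taken from the example scripts themselves):

```python
# Run an example on top of PyTorch by swapping a single import;
# fall back to plain NumPy where torch_np is not installed.
try:
    import torch_np as np
except ImportError:
    import numpy as np

Z = np.zeros((5, 5))
Z[2, 2] = 1.0
print(Z.sum())  # identical under either backend
```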