[caffe2] replace references to np.asscalar (#121332) #121545
Conversation
Summary: `np.asscalar` was deprecated and removed in a recent NumPy release. It used to be implemented as follows, and the recommended alternative is to call `item()` directly:

```python
def asscalar(a):
    return a.item()
```

This fixes all of the references.

Test Plan: visual inspection and automated tests

Differential Revision: D54697760
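The migration this PR performs can be illustrated with a short sketch. The removed helper was a thin wrapper around `item()`, so every call site can be rewritten mechanically:

```python
import numpy as np

# What np.asscalar(a) used to do internally (deprecated, then removed
# from NumPy):
def asscalar(a):
    return a.item()

# Recommended migration: call .item() directly on the array or scalar.
x = np.array([3.5])
value = x.item()  # extracts the single element as a plain Python float
assert isinstance(value, float)
```

Note that `item()` also works on zero-dimensional arrays and NumPy scalar types, so the replacement is a drop-in change at each reference.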
🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/121545

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures as of commit ca4a0b6 with merge base a656e12.

This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D54697760
@pytorchbot merge -f 'Landed internally'

(Initiating merge automatically since Phabricator Diff has merged, using force because this PR might not pass merge_rules.json but landed internally)

Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
This PR is part of an effort to speed up torch.onnx.export (#121422).
- The inputs (dynamic inputs and constants) do not change as nodes are added, and it is expensive to re-compute this value for every node. So, we cache it to avoid recomputation per node. Open to entirely different solutions as well.
- Resolves (5) in #121422. (partial fix of #121545)

Pull Request resolved: #123028
Approved by: https://github.com/justinchuby
This PR is part of an effort to speed up torch.onnx.export (#121422).
- For each node that is processed in onnx.export, a check is run to see if all inputs are "reliable" (static shape, etc.). This value does not change, so it is much faster to cache it on the first computation. The caching is added to the ConstantMap state.
- Resolves (6) in #121422.
- Also see #123028 with a similar addition of a cache state. (partial fix of #121545)

Pull Request resolved: #124912
Approved by: https://github.com/justinchuby
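The compute-once caching pattern described in these two PRs can be sketched in a few lines. This is a hypothetical illustration, not the actual torch.onnx internals; the class and method names below are stand-ins:

```python
# Hypothetical sketch of caching an expensive, unchanging check:
# the "reliability" of a node's inputs (static shapes, etc.) is
# computed once and stored, instead of being recomputed per node.
class ConstantMap:
    def __init__(self):
        self._all_inputs_reliable = None  # cache slot, unset until first use

    def _compute_reliability(self, inputs):
        # Stand-in for the expensive per-input check.
        return all(i.get("static_shape", False) for i in inputs)

    def all_inputs_reliable(self, inputs):
        # Compute on first call only; later calls return the cached value.
        if self._all_inputs_reliable is None:
            self._all_inputs_reliable = self._compute_reliability(inputs)
        return self._all_inputs_reliable
```

Because the cached value never changes during an export, subsequent calls skip the check entirely, which is where the speedup comes from.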