[ONNX] Do not use numpy in ONNX opsets #65188
Conversation
As `numpy` is an optional dependency, we should avoid using it in PyTorch core:
- Replace `torch.tensor([numpy.arange(a, b, c)])` with `torch.arange(a, b, c).unsqueeze(0)`
- Replace `tuple(numpy.add(a, b))` with `tuple(x + y for (x, y) in zip(a, b))`
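A minimal sketch of the first replacement, with hypothetical values for `a`, `b`, and `c` (they stand for whatever range bounds and step the opset code uses):

```python
import torch

# Hypothetical arange bounds/step, for illustration only.
a, b, c = 0, 10, 2

# Before (pulls in numpy just to build a batched range tensor):
#   torch.tensor([numpy.arange(a, b, c)])
# After (pure torch): build the range, then add a leading dimension.
result = torch.arange(a, b, c).unsqueeze(0)
```

Both forms produce a tensor of shape `(1, N)`; the torch-only version avoids importing numpy and the extra numpy-to-tensor copy.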
@malfet has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Codecov Report

@@            Coverage Diff             @@
##           master   #65188      +/-   ##
==========================================
- Coverage   66.37%   66.34%   -0.03%
==========================================
  Files         732      732
  Lines       93630    93684      +54
==========================================
+ Hits        62144    62156      +12
- Misses      31486    31528      +42
Replace `torch.tensor([numpy.arange(a, b, c)])` with `torch.arange(a, b, c).unsqueeze(0)`.
Replace `tuple(numpy.add(a, b))` with `tuple(x + y for (x, y) in zip(a, b))`.
As `numpy` is an optional dependency, it shouldn't be used in PyTorch core by default.
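The second replacement can be sketched in plain Python; the tuples below are hypothetical stand-ins for whatever sequences (e.g. shape tuples) the opset code combines:

```python
# Hypothetical input tuples, for illustration only.
a = (1, 2, 3)
b = (10, 20, 30)

# Before: tuple(numpy.add(a, b))
# After: element-wise sum with a generator expression, no numpy needed.
result = tuple(x + y for (x, y) in zip(a, b))
```

For the small, fixed-length tuples typical of shape arithmetic, the pure-Python version is equivalent and avoids importing numpy.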