
[ONNX] Adding 'numel' and 'to' export for script module #36501

Closed


@neginraoof (Collaborator) commented Apr 13, 2020

These two ops are needed for torchvision model export. Since we're scripting part of the code for dynamic export of models (in pytorch/vision#2052), these two changes are required.
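For context, here is a minimal sketch (a hypothetical module, not from torchvision) of the kind of scripted code whose export depends on these two ops. Scripting, rather than tracing, preserves the data-dependent control flow, so both aten::numel and aten::to must have ONNX symbolics when exporting from TorchScript:

```python
import torch

# Hypothetical module illustrating the two ops this PR adds symbolics for:
# aten::numel and aten::to. The early return on an empty tensor is
# data-dependent control flow that only survives via torch.jit.script.
class Normalize(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if x.numel() == 0:                       # 'numel' in a script module
            return x
        return x.to(torch.float32) / x.numel()   # 'to' for dtype conversion

m = torch.jit.script(Normalize())
y = m(torch.ones(2, 3, dtype=torch.int64))
```

A module like this could then be passed to torch.onnx.export once both symbolics are registered.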

neginraoof added 30 commits Mar 24, 2020
neginraoof added 7 commits Apr 9, 2020
@dr-ci (bot) commented Apr 13, 2020

💊 Build failures summary and remediations

As of commit 7e3091c (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚



def numel(g, self):
    shape = g.op("Shape", self)
    dtype = shape.type().scalarType()
    if dtype is None:  # dtype could be None for a script module


@BowenBao (Contributor) commented Apr 13, 2020

shape is just created above, so I think dtype would always be None here.

On the other hand, according to the ONNX spec, the Shape operator always outputs a tensor of type Long, so why do we need to insert the cast node?


@neginraoof (Author, Collaborator) commented Apr 13, 2020

I was seeing an ONNX Runtime "kernel not implemented" error and thought the missing type was the issue. But you're correct, the cast should not be needed.
This PR needs to wait until microsoft/onnxruntime#3507 is merged then.
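For reference, the exported subgraph computes numel as a ReduceProd over the Shape output. A numpy sketch of that semantics (my reconstruction of the graph's behavior, not the PR's exact symbolic code), which also shows why no Cast is needed, since Shape already yields int64:

```python
import numpy as np

def onnx_numel_reference(x: np.ndarray) -> int:
    # Mirrors the exported ONNX subgraph for aten::numel:
    #   Shape(x)      -> 1-D int64 tensor of dimensions
    #   ReduceProd(.) -> product of dimensions = total element count
    # Per the ONNX spec, Shape outputs int64, so no Cast node is required.
    shape = np.asarray(x.shape, dtype=np.int64)
    return int(np.prod(shape))

print(onnx_numel_reference(np.zeros((2, 3, 4))))  # prints 24
```

Note that for a 0-D (scalar) tensor the shape vector is empty and the empty product is 1, matching torch.Tensor.numel.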

@lara-hdr (Member) commented Apr 14, 2020

@neginraoof, it seems like ReduceProd is not available in ort for all opsets (it's erroring out with "Failed to find kernel for ReduceProd" for opsets 7-9). Could you disable the tests for the older versions and/or request the kernel implementation?

@neginraoof (Collaborator, Author) commented Apr 17, 2020

@lara-hdr Thanks for reviewing this. I updated ReduceProd in onnxruntime and am waiting for the nightly build (it's stuck); I'll need to bump the ort-nightly version after that.

neginraoof added 5 commits Apr 21, 2020
@neginraoof (Collaborator, Author) commented Apr 22, 2020

cc @houseroad for review. Thanks.

@neginraoof (Collaborator, Author) commented Apr 27, 2020

cc @houseroad for review. Thanks.

facebook-github-bot (Contributor) left a comment

@houseroad has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@BowenBao (Contributor) left a comment

LGTM

@neginraoof (Collaborator, Author) commented Apr 28, 2020

Thanks @BowenBao.
@houseroad, if there are no internal failures, can we merge this please? Thanks.

@facebook-github-bot (Contributor) commented Apr 28, 2020

@houseroad merged this pull request in 5809288.
