
[ONNX] disable size optimizations for onnx #36243

Closed
wants to merge 22 commits into from

Conversation

@BowenBao
Contributor

BowenBao commented Apr 8, 2020

Reviving PR #35401, @eellison. I believe the test failures are handled after the profiled graph executor fix.

@BowenBao BowenBao requested a review from apaszke as a code owner Apr 8, 2020
@dr-ci

dr-ci bot commented Apr 8, 2020

💊 CircleCI build failures summary and remediations

As of commit c00c00c (more details on the Dr. CI page):


None of the build failures appear to be your fault 💚


  • 1/1 broken upstream at merge base 72b55fe on Apr 07 from 4:37pm to 9:09pm PDT (5 commits; 373dc7c - e2f9c66)

    Please rebase on the viable/strict branch:

    If your commit is newer than viable/strict, you can try basing on an older, stable commit:

    git fetch https://github.com/pytorch/pytorch viable/strict
    git rebase --onto FETCH_HEAD $(git merge-base origin/master HEAD)
    

    If your commit is older than viable/strict:

    git fetch https://github.com/pytorch/pytorch viable/strict
    git rebase FETCH_HEAD
    

    Check out the recency history of this "viable master" tracking branch.


🚧 1 upstream failure:

These were probably caused by upstream breakages:


This comment was automatically generated by Dr. CI. Follow this link to opt out of these comments for your Pull Requests.

Please report bugs/suggestions on the GitHub issue tracker.

See how this bot performed.

This comment has been revised 2 times.

@gchanan gchanan requested a review from eellison Apr 8, 2020
@gchanan gchanan added the triaged label Apr 8, 2020
@eellison
Contributor

eellison commented Apr 8, 2020

The PR looks fine, but a separate PR, #35386, is adding more constant propagation of shape information at export time. I would imagine we don't want both this PR and that one to land.

@BowenBao
Contributor Author

BowenBao commented Apr 9, 2020

@eellison the ONNX constant folding is different; its scope is smaller. Since it is an export-time optimization, we can only fold things that we know won't change, so only constant nodes are folded, e.g. onnx::Constant[value_t=...]. The JIT optimization here happens for any tensor with shape info.
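[Editorial note: a minimal sketch illustrating the distinction, not code from this PR. Tracing records aten::size as a graph op, so shape expressions stay dynamic in the trace; the generic JIT size optimization being disabled here would fold them to constants using the recorded shape info, which would bake in the example input's batch size and break exports that use dynamic axes.]

```python
import torch

def f(x):
    # Traced as: aten::size(%x, 0) -> prim::NumToTensor -> aten::add,
    # so the batch size stays a dynamic value in the graph.
    return x + x.size(0)

traced = torch.jit.trace(f, torch.randn(2, 3))

# Because aten::size is kept as an op rather than folded to the constant 2,
# the same trace still computes the right answer for a different batch size.
out = traced(torch.ones(4, 3))  # 1 + 4 everywhere
```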

@eellison
Contributor

eellison commented Apr 9, 2020

> @eellison the ONNX constant folding is different, the scope is smaller. Since it is export time optimization, we can only fold things that we know won't change, thus only constant nodes are folded. e.g. onnx::Constant[value_t=...]. The jit optimization here happens for any tensor with shape info.

So the traced graph for the example looks like:

graph(%self : __torch__.ShapeModule,
      %x : Double(2, 5)):
  %9 : Tensor = prim::GetAttr[name="weight"](%self)
  %4 : int = prim::Constant[value=0]() # test/test_jit.py:11115:0
  %5 : int = aten::size(%9, %4) # test/test_jit.py:11115:0
  %shape : Long() = prim::NumToTensor(%5)
  %7 : int = prim::Constant[value=1]() # test/test_jit.py:11116:0
  %8 : Double(2, 5) = aten::add(%x, %shape, %7) # test/test_jit.py:11116:0
  return (%8)

How does the %9 : Tensor = prim::GetAttr(...) get lowered?
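[Editorial note: the traced graph above plausibly comes from a module along these lines. This is a reconstruction for illustration; the exact module and weight shape in test/test_jit.py may differ.]

```python
import torch

class ShapeModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Hypothetical shape; only dim 0 is read in forward().
        self.weight = torch.nn.Parameter(torch.randn(3, 5))

    def forward(self, x):
        shape = self.weight.shape[0]  # -> prim::GetAttr + aten::size
        return x + shape              # -> prim::NumToTensor + aten::add

m = ShapeModule()
traced = torch.jit.trace(m, torch.randn(2, 5).double())
out = traced(torch.ones(2, 5).double())  # 1 + 3 everywhere
```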

Contributor

eellison left a comment

LGTM

Contributor

facebook-github-bot left a comment

@eellison has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

facebook-github-bot commented Apr 10, 2020

@eellison merged this pull request in 48bf3ee.

ashishfarmer added a commit to ashishfarmer/pytorch that referenced this pull request Apr 13, 2020
Summary:
Reviving this PR pytorch#35401 eellison. I believe after the profiled graph executor fix the test failures are handled.
Pull Request resolved: pytorch#36243

Differential Revision: D20950623

Pulled By: eellison

fbshipit-source-id: 5fbee426d1a098d84d5938540d45ce00828299be