[RELAY][REFACTOR] Mix mode context analysis #6403
Conversation
cc @mbrookhart @masahi @icemelon9 @jroesch
LGTM
Thanks very much for working on this!
I need this fix for a new PyTorch frontend test I'm working on, so merging now. Thanks @zhiics @mbrookhart @leandron
* mix mode context analysis
* add uses_gpu decorator for more tests
* revert visit counter
* relax visit limit
* lint
* bump visit limit to 19
* typo
https://discuss.tvm.apache.org/t/vm-an-error-from-context-analysis-pass/7818
This PR switches the context analysis pass to the MixedModeVisitor to leverage its non-recursive traversal and memoization. We bumped the visit limit so that higher-order functions can be visited multiple times, which enables the PyTorch LSTM tests. More powerful unification still needs to be added for these functions.
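To make the idea concrete, here is a minimal sketch (not TVM's actual code) of a non-recursive visitor with memoization and a per-node visit limit, mirroring the MixedModeVisitor behavior described above; the `Node`, `visit`, and `visit_limit` names are illustrative assumptions:

```python
from collections import Counter


class Node:
    """Toy expression node for illustration; TVM uses Relay Expr objects."""

    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)


def visit(root, visit_limit=19):
    """Iteratively traverse a DAG without recursion, visiting each
    node at most `visit_limit` times (memoized by visit count)."""
    counts = Counter()
    order = []
    stack = [root]
    while stack:
        node = stack.pop()
        if counts[id(node)] >= visit_limit:
            # Memoization: skip nodes already visited up to the limit.
            continue
        counts[id(node)] += 1
        order.append(node.name)
        # Push children in reverse so they pop in source order.
        stack.extend(reversed(node.children))
    return order
```

With `visit_limit=1` a shared node is visited once; raising the limit (the PR bumps it to 19) lets shared higher-order nodes be revisited, at the cost of repeated work, without risking unbounded recursion.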
In addition, the uses_gpu decorator is added to many more unit tests.