Move shape and operand definitions to base node #75223
Conversation
🔗 Helpful links
💊 CI failures summary and remediations — as of commit 11aabbf (more details on the Dr. CI page): 💚 Looks good so far! There are no failures yet. 💚
This comment was automatically generated by Dr. CI. Please report bugs/suggestions to the (internal) Dr. CI Users group.
    hash = HashCombine(hash, static_cast<uint64_t>(kNullOpt));
    continue;
  }
  auto operand_hash = bakeInSizes ? operand.hash_with_sizes() : operand.hash_without_sizes();
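For readers outside the lazy-IR codebase, here is a minimal self-contained sketch of the size-baking pattern the hunk above exercises. The Operand struct, the HashCombine body, and the kNullOpt value below are simplified stand-ins invented for illustration, not the actual PyTorch implementations; only the bakeInSizes branching mirrors the diff.

#include <cstdint>
#include <vector>

// Simplified stand-in for the real hash-combining utility.
inline uint64_t HashCombine(uint64_t a, uint64_t b) {
  return a ^ (b + 0x9e3779b97f4a7c15ULL + (a << 6) + (a >> 2));
}

// Hypothetical operand: a node output with a precomputed base hash and sizes.
struct Operand {
  uint64_t base_hash;
  std::vector<int64_t> sizes;

  // Hash that folds the concrete sizes in (shape-specialized caching).
  uint64_t hash_with_sizes() const {
    uint64_t h = base_hash;
    for (int64_t s : sizes) h = HashCombine(h, static_cast<uint64_t>(s));
    return h;
  }

  // Hash that ignores sizes (shape-agnostic caching, e.g. dynamic shapes).
  uint64_t hash_without_sizes() const { return base_hash; }
};

// Sketch of the loop in the diff: combine operand hashes, choosing the
// size-dependent or size-agnostic variant per the bakeInSizes flag.
uint64_t GetOperandHashes(const std::vector<const Operand*>& operands,
                          uint64_t seed, bool bakeInSizes) {
  constexpr uint64_t kNullOpt = 0x8655d738f3678ddaULL;  // sentinel for null operands
  uint64_t hash = seed;
  for (const Operand* operand : operands) {
    if (!operand) {
      hash = HashCombine(hash, static_cast<uint64_t>(kNullOpt));
      continue;
    }
    auto operand_hash =
        bakeInSizes ? operand->hash_with_sizes() : operand->hash_without_sizes();
    hash = HashCombine(hash, operand_hash);
  }
  return hash;
}

The point of the two variants is that a trace hash can either specialize on concrete operand sizes or stay size-agnostic, which determines whether traces with different shapes can share a cache entry.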
@JackCaoG I don't think the `hash_with_sizes`/`hash_without_sizes` here is a big deal, since they are already APIs on `Node::` and that's the bigger issue. OK with you to land this and then clean up those methods separately?
@wonjoolee95
We actually have our own version of `GetOperandHashes`. I think the constructor is a bit messy right now, since we explicitly override it to take an `xla::Shape` and currently ignore the `lazy::Shape`. The new signature
Node::Node(OpKind op, OpList operands, std::vector<Shape>&& shapes,
           size_t num_outputs, hash_t hash_seed)
will not affect us until we adapt codegen and start passing `lazy::Shape` around. I am OK with this change now.
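To make the situation described above concrete, here is a hedged sketch of that pattern; all type names (XlaNode, XlaShape, the Shape/OpKind/OpList stand-ins) are invented for illustration and are not the actual torch_xla code. It shows a backend node that carries its own xla::Shape-like value and hands an empty lazy::Shape vector to the new base constructor.

#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

using hash_t = uint64_t;

// Minimal stand-ins for the lazy-IR types referenced in the thread.
struct OpKind { const char* name; };
struct Shape { std::vector<int64_t> sizes; };    // stands in for torch::lazy::Shape
struct XlaShape { std::vector<int64_t> dims; };  // stands in for xla::Shape
struct Output {};
using OpList = std::vector<Output>;

// Base Node owning operands and (lazy) shapes, per the signature quoted above.
class Node {
 public:
  Node(OpKind op, OpList operands, std::vector<Shape>&& shapes,
       size_t num_outputs, hash_t hash_seed)
      : op_(op),
        operands_(std::move(operands)),
        shapes_(std::move(shapes)),
        num_outputs_(num_outputs),
        hash_seed_(hash_seed) {}
  virtual ~Node() = default;

 private:
  OpKind op_;
  OpList operands_;
  std::vector<Shape> shapes_;  // moved down from TsNode into the base class
  size_t num_outputs_;
  hash_t hash_seed_;
};

// Backend-style subclass: it keeps its own xla::Shape and passes an empty
// lazy::Shape vector to the base, so the new base member is effectively
// ignored until codegen starts threading lazy::Shape through.
class XlaNode : public Node {
 public:
  XlaNode(OpKind op, OpList operands, XlaShape xla_shape,
          size_t num_outputs, hash_t hash_seed)
      : Node(op, std::move(operands), /*shapes=*/{}, num_outputs, hash_seed),
        xla_shape_(std::move(xla_shape)) {}

 private:
  XlaShape xla_shape_;
};

This is why the change is a no-op for the XLA backend: the subclass constructor never populates the base's shape vector, so moving that member from TsNode into Node changes where the storage lives but not what the backend reads.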
Thanks for pointing that out. That is correct; this should be a no-op from XLA's side.
Thanks! LGTM from PT/XLA's side. I can follow up by removing some of the code on XLA's side once this PR merges. cc @JackCaoG
Thanks. PyTorch/XLA can have a follow-up PR to clean up our `Node`.
@wconstab has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Summary: First stage of breaking up #74710. Moves the shape and operand definitions from `TsNode` to the base `Node`.
CC: wconstab JackCaoG henrytwo
Partially Fixes #74628
Pull Request resolved: #75223
Reviewed By: zou3519
Differential Revision: D35410285
Pulled By: wconstab
fbshipit-source-id: bb84d3fb636882cbe7e18af4b35ff2c0e22aaa58
Hey @antoniojkim.
First stage of breaking up #74710
Moves the shape and operand definitions from `TsNode` to the base `Node`.
CC: @wconstab @JackCaoG @henrytwo
Partially Fixes #74628