Conversation

@ZolotukhinM commented Aug 17, 2020

Stack from ghstack:

Differential Revision: D23178228

dr-ci bot commented Aug 17, 2020

💊 CI failures summary and remediations

As of commit d317823 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚



@eellison (Contributor) left a comment


Why are we failing instead of bailing now?

@ZolotukhinM (Author) commented

> Why are we failing instead of bailing now?

Because the logic is that we are merging new nodes into an existing fusion group: all the nodes already in the group have to be supported. That's an invariant, since we start from a fusible node and only add other fusible nodes to it. This check should not be needed at all, but since we had it before, I replaced it with an assert.
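
To make the invariant concrete, here is a minimal sketch of the construction being described (the helpers startFusionGroupWith, fusibleNeighborsOf, and mergeNodeIntoGroup are hypothetical names for illustration; only canHandle and Node come from the code under review):

// Sketch only, not the actual fuser code.
Node* buildFusionGroup(Node* seed) {
  TORCH_INTERNAL_ASSERT(canHandle(seed)); // we start from a fusible node
  Node* group = startFusionGroupWith(seed); // hypothetical helper
  for (Node* candidate : fusibleNeighborsOf(group)) { // hypothetical helper
    if (canHandle(candidate)) { // ...and only add other fusible nodes,
      mergeNodeIntoGroup(group, candidate); // hypothetical helper
    }
  }
  // so "every node in the group is supported" holds by construction, and
  // re-checking it inside the merge can be an assert rather than a bail-out.
  return group;
}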

@eellison (Contributor) left a comment


Agreed, the check in canMerge is redundant; I'm not sure about removing the output shapes check, though.


bool allShapesAreKnown(Node* node) {
  // TODO: Relax the checks to support dynamic shapes
  for (Value* output : node->outputs()) {
@eellison (Contributor) commented

Just because you know the input shapes doesn't mean you know the output shapes:

def index(i: int):
    x = torch.tensor([1, 2, 3, 4])
    out = x[i]

@ZolotukhinM (Author) commented

It does for the subset of ops NNC currently supports, though. When we add more complicated ops we will need to adjust the fuser logic as well, but as it stands now, I think we can always infer the output shapes when we know the input shapes.
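
For the elementwise subset, the claim is easy to see: a pointwise op's output shape is fully determined by broadcasting its input shapes, so known inputs always give a known output. A standalone sketch of that rule (illustrative only, not NNC's actual shape inference):

#include <algorithm>
#include <cstdint>
#include <stdexcept>
#include <vector>

// Align shapes from the right; each output dim is the max of the
// (possibly size-1) input dims, e.g. {4, 1, 3} + {2, 3} -> {4, 2, 3}.
std::vector<int64_t> broadcastShapes(
    const std::vector<int64_t>& a,
    const std::vector<int64_t>& b) {
  std::vector<int64_t> out(std::max(a.size(), b.size()));
  for (size_t i = 0; i < out.size(); ++i) {
    int64_t da = i < out.size() - a.size() ? 1 : a[i - (out.size() - a.size())];
    int64_t db = i < out.size() - b.size() ? 1 : b[i - (out.size() - b.size())];
    if (da != db && da != 1 && db != 1) {
      throw std::runtime_error("shapes are not broadcastable");
    }
    out[i] = std::max(da, db);
  }
  return out;
}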

  // Symbolic checks
  REQ(canHandle(producer));
- REQ((canHandle(consumer) || consumer->kind() == getTensorExprSymbol()));
+ AT_ASSERT(canHandle(consumer) || consumer->kind() == getTensorExprSymbol());
@eellison (Contributor) commented

nit: AT_ASSERT -> TORCH_INTERNAL_ASSERT
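
Applied to the line above, the change would read (TORCH_INTERNAL_ASSERT is the macro PyTorch prefers over the older AT_ASSERT for internal invariants):

TORCH_INTERNAL_ASSERT(
    canHandle(consumer) || consumer->kind() == getTensorExprSymbol());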

Mikhail Zolotukhin added 2 commits August 19, 2020 13:21

…(output shapes can be inferred)."

Differential Revision: [D23178228](https://our.internmc.facebook.com/intern/diff/D23178228)

[ghstack-poisoned]
@facebook-github-bot (Contributor) commented
@ZolotukhinM merged this pull request in 8dc4b41.

@facebook-github-bot deleted the gh/ZolotukhinM/293/head branch August 29, 2020 14:15
loadbxh pushed a commit to loadbxh/Torch that referenced this pull request Sep 23, 2020
…pes can be inferred).

ghstack-source-id: ea617d8
Pull Request resolved: pytorch/pytorch#43171
Labels: Merged, oncall: jit