Fix FX graph stack traceback issues #1655
Conversation
what's the difference between this new stack of tx's that you're adding to
Lgtm
    def fake_mode(self):
        return self.root_tx.fake_mode

    def push_tx(self, tx):
Ah, you went with the stack after all :) glad it worked
    def push_tx(self, tx):
        self._current_tx.append(tx)

    def pop_tx(self):
Nit: idiomatic for pop to return what it popped
        return result

    def unpack_var_sequence(self, tx):
    def unpack_var_sequence_range(self, tx, range):
Hmm I wonder if we eventually need to do this for all unpacks? As in, make them all take range or provide a default?
I'm about to switch over to core, let's hold off on landing.
We have migrated. More details and instructions to port this PR over can be found in #1588.
The output graph only keeps a reference to a root (first) tx, not the most current one. The tx stack replaces the current_tx kwarg, since passing around a translator object as a kwarg is quite prone to errors.
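A minimal sketch of the stack-based approach described above, assuming an OutputGraph that holds a root translator plus a stack of active ones (names and shapes are illustrative, not the actual torchdynamo implementation; note pop_tx returns what it popped, per the review nit):

```python
class OutputGraph:
    """Illustrative sketch: track a stack of instruction translators
    instead of threading the current one through as a kwarg."""

    def __init__(self, root_tx):
        self.root_tx = root_tx    # first translator; never changes
        self._current_tx = []     # stack of currently active translators

    @property
    def current_tx(self):
        # innermost active translator, falling back to the root
        return self._current_tx[-1] if self._current_tx else self.root_tx

    def push_tx(self, tx):
        self._current_tx.append(tx)

    def pop_tx(self):
        # idiomatic: pop returns the popped translator
        return self._current_tx.pop()
```

With this shape, code that creates FX graph nodes can always ask the output graph for current_tx instead of receiving it as a parameter, which avoids the class of bugs where the wrong translator is passed along.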
Migration from pytorch/torchdynamo#1655. Pull Request resolved: #87136 Approved by: https://github.com/voznesenskym
Fixes #1167 and related bugs where the current translator object is not properly given to FX graph nodes.
Example: https://gist.github.com/williamwen42/1efa7ce86b7ac797e90a13356668d90d
(Note: running the above script before this change requires this modification to the code:
Running after this change requires setting torchdynamo.config.output_graph_code = True.)

Running

    python benchmarks/torchbench.py --only BERT_pytorch --performance --verbose

now gives: