[Inductor] [CPU] Low passrate in model bench with ShapeProp error #1978
Description
In the WW50.4 TorchInductor CPU Performance Dashboard, we observed a low pass rate for the model benchmarks.
SW information
| SW | Nightly commit | Master/Main commit |
|---|---|---|
| Pytorch | c8ee46c | 26d1dbc |
| Torchbench | / | 2e5d723 |
| torchaudio | c44b576 | 8ba323b |
| torchtext | ebcfed5 | b3390fb |
| torchvision | d0f2888 | 5b4f79d |
| dynamo/benchmarks | db1da1f | 5266953 |
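For reference, the failure can presumably be reproduced with the dashboard-style torchbench runner that appears in the traceback below; the exact flags here are an assumption based on the usual invocation and may need adjusting for your checkout:

```
python benchmarks/dynamo/torchbench.py --performance --float32 -dcpu --inductor --only attention_is_all_you_need_pytorch
```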
Error log (a similar error is seen in several models; attention_is_all_you_need_pytorch is used as the example here):
cpu eval attention_is_all_you_need_pytorch
Traceback (most recent call last):
File "/workspace/pytorch/torch/fx/passes/shape_prop.py", line 141, in run_node
result = super().run_node(n)
File "/workspace/pytorch/torch/fx/interpreter.py", line 171, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "/workspace/pytorch/torch/fx/interpreter.py", line 243, in call_function
return target(*args, **kwargs)
File "/workspace/pytorch/torch/_inductor/overrides.py", line 37, in __torch_function__
return func(*args, **kwargs)
File "/workspace/pytorch/torch/_subclasses/fake_tensor.py", line 611, in __torch_dispatch__
return func(*args, **kwargs)
File "/workspace/pytorch/torch/_ops.py", line 285, in __call__
return self._op(*args, **kwargs or {})
File "/workspace/pytorch/torch/_subclasses/fake_tensor.py", line 787, in __torch_dispatch__
raise Exception(
Exception: Invoking operators with non-Fake Tensor inputs in FakeTensorMode is not yet supported. Please convert all Tensors to FakeTensors first. Found in aten.bitwise_and.Tensor(*(FakeTensor(FakeTensor(..., device='meta', size=(32, 1, 21), dtype=torch.bool), cpu), tensor([[[ True, False, False, ..., False, False, False],
         ...,
         [ True,  True,  True, ...,  True,  True,  True]]])), **{})
[The second operand's full printout, a real (1, 21, 21) lower-triangular boolean mask, is elided here for readability.]
ERROR:common:Failed for dynamo compile_fx raised RuntimeError: ShapeProp error for: node=%and_ : [#users=6] = call_function[target=operator.and_](args = (%unsqueeze_1, %to), kwargs = {}) with meta={'stack_trace': 'Module stack: {\'mod\': <class \'torchbenchmark.models.attention_is_all_you_need_pytorch.transformer.Models.Transformer\'>}\n File "/workspace/benchmark/torchbenchmark/models/attention_is_all_you_need_pytorch/transformer/Models.py", line 169, in forward\n trg_mask = get_pad_mask(trg_seq, self.trg_pad_idx) & get_subsequent_mask(trg_seq)\n | File "benchmarks/dynamo/torchbench.py", line 377, in forward_pass\n return mod(*inputs)\n'}
While executing %and_ : [#users=6] = call_function[target=operator.and_](args = (%unsqueeze_1, %to), kwargs = {})
Original traceback:
Module stack: {'mod': <class 'torchbenchmark.models.attention_is_all_you_need_pytorch.transformer.Models.Transformer'>}
File "/workspace/benchmark/torchbenchmark/models/attention_is_all_you_need_pytorch/transformer/Models.py", line 169, in forward
trg_mask = get_pad_mask(trg_seq, self.trg_pad_idx) & get_subsequent_mask(trg_seq)
| File "benchmarks/dynamo/torchbench.py", line 377, in forward_pass
return mod(*inputs)
Set torch._dynamo.config.verbose=True for more information
You can suppress this exception and fall back to eager by setting:
torch._dynamo.config.suppress_errors = True
Traceback (most recent call last):
File "/workspace/pytorch/torch/fx/passes/shape_prop.py", line 141, in run_node
result = super().run_node(n)
File "/workspace/pytorch/torch/fx/interpreter.py", line 171, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "/workspace/pytorch/torch/fx/interpreter.py", line 243, in call_function
return target(*args, **kwargs)
File "/workspace/pytorch/torch/_inductor/overrides.py", line 37, in __torch_function__
return func(*args, **kwargs)
File "/workspace/pytorch/torch/_subclasses/fake_tensor.py", line 611, in __torch_dispatch__
return func(*args, **kwargs)
File "/workspace/pytorch/torch/_ops.py", line 285, in __call__
return self._op(*args, **kwargs or {})
File "/workspace/pytorch/torch/_subclasses/fake_tensor.py", line 787, in __torch_dispatch__
raise Exception(
Exception: Invoking operators with non-Fake Tensor inputs in FakeTensorMode is not yet supported. Please convert all Tensors to FakeTensors first. Found in aten.bitwise_and.Tensor(*(FakeTensor(FakeTensor(..., device='meta', size=(32, 1, 21), dtype=torch.bool), cpu), tensor([[[ True, False, False, ..., False, False, False],
         ...,
         [ True,  True,  True, ...,  True,  True,  True]]])), **{})
[Same (1, 21, 21) lower-triangular boolean mask printout elided as above.]
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/workspace/pytorch/torch/_dynamo/output_graph.py", line 586, in call_user_compiler
compiled_fn = compiler_fn(gm, self.fake_example_inputs())
File "/workspace/pytorch/torch/_dynamo/debug_utils.py", line 915, in debug_wrapper
compiled_gm = compiler_fn(gm, example_inputs, **kwargs)
File "/workspace/pytorch/torch/_inductor/compile_fx.py", line 360, in compile_fx
model_ = overrides.fuse_fx(model_, example_inputs_)
File "/workspace/pytorch/torch/_inductor/overrides.py", line 556, in fuse_fx
ShapeProp(gm, fake_mode=fake_mode).propagate(*example_inputs)
File "/workspace/pytorch/torch/fx/passes/shape_prop.py", line 179, in propagate
return super().run(*args)
File "/workspace/pytorch/torch/fx/interpreter.py", line 130, in run
self.env[node] = self.run_node(node)
File "/workspace/pytorch/torch/fx/passes/shape_prop.py", line 146, in run_node
raise RuntimeError(
RuntimeError: ShapeProp error for: node=%and_ : [#users=6] = call_function[target=operator.and_](args = (%unsqueeze_1, %to), kwargs = {}) with meta={'stack_trace': 'Module stack: {\'mod\': <class \'torchbenchmark.models.attention_is_all_you_need_pytorch.transformer.Models.Transformer\'>}\n File "/workspace/benchmark/torchbenchmark/models/attention_is_all_you_need_pytorch/transformer/Models.py", line 169, in forward\n trg_mask = get_pad_mask(trg_seq, self.trg_pad_idx) & get_subsequent_mask(trg_seq)\n | File "benchmarks/dynamo/torchbench.py", line 377, in forward_pass\n return mod(*inputs)\n'}
While executing %and_ : [#users=6] = call_function[target=operator.and_](args = (%unsqueeze_1, %to), kwargs = {})
Original traceback:
Module stack: {'mod': <class 'torchbenchmark.models.attention_is_all_you_need_pytorch.transformer.Models.Transformer'>}
File "/workspace/benchmark/torchbenchmark/models/attention_is_all_you_need_pytorch/transformer/Models.py", line 169, in forward
trg_mask = get_pad_mask(trg_seq, self.trg_pad_idx) & get_subsequent_mask(trg_seq)
| File "benchmarks/dynamo/torchbench.py", line 377, in forward_pass
return mod(*inputs)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/workspace/pytorch/benchmarks/dynamo/common.py", line 1189, in warmup
fn(model, example_inputs)
File "/workspace/pytorch/torch/_dynamo/eval_frame.py", line 209, in _fn
return fn(*args, **kwargs)
File "/workspace/pytorch/torch/_dynamo/eval_frame.py", line 329, in catch_errors
return callback(frame, cache_size)
File "/workspace/pytorch/torch/_dynamo/convert_frame.py", line 470, in _convert_frame
result = inner_convert(frame, cache_size)
File "/workspace/pytorch/torch/_dynamo/convert_frame.py", line 102, in _fn
return fn(*args, **kwargs)
File "/workspace/pytorch/torch/_dynamo/utils.py", line 90, in time_wrapper
r = func(*args, **kwargs)
File "/workspace/pytorch/torch/_dynamo/convert_frame.py", line 339, in _convert_frame_assert
return _compile(
File "/workspace/pytorch/torch/_dynamo/convert_frame.py", line 395, in _compile
out_code = transform_code_object(code, transform)
File "/workspace/pytorch/torch/_dynamo/bytecode_transformation.py", line 341, in transform_code_object
transformations(instructions, code_options)
File "/workspace/pytorch/torch/_dynamo/convert_frame.py", line 382, in transform
tracer.run()
File "/workspace/pytorch/torch/_dynamo/symbolic_convert.py", line 1621, in run
super().run()
File "/workspace/pytorch/torch/_dynamo/symbolic_convert.py", line 485, in run
and self.step()
File "/workspace/pytorch/torch/_dynamo/symbolic_convert.py", line 454, in step
getattr(self, inst.opname)(inst)
File "/workspace/pytorch/torch/_dynamo/symbolic_convert.py", line 1683, in RETURN_VALUE
self.output.compile_subgraph(self)
File "/workspace/pytorch/torch/_dynamo/output_graph.py", line 439, in compile_subgraph
self.compile_and_call_fx_graph(tx, list(reversed(stack_values)), root)
File "/workspace/pytorch/torch/_dynamo/output_graph.py", line 510, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/workspace/pytorch/torch/_dynamo/output_graph.py", line 591, in call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e) from e
torch._dynamo.exc.BackendCompilerFailed: compile_fx raised RuntimeError: ShapeProp error for: node=%and_ : [#users=6] = call_function[target=operator.and_](args = (%unsqueeze_1, %to), kwargs = {}) with meta={'stack_trace': 'Module stack: {\'mod\': <class \'torchbenchmark.models.attention_is_all_you_need_pytorch.transformer.Models.Transformer\'>}\n File "/workspace/benchmark/torchbenchmark/models/attention_is_all_you_need_pytorch/transformer/Models.py", line 169, in forward\n trg_mask = get_pad_mask(trg_seq, self.trg_pad_idx) & get_subsequent_mask(trg_seq)\n | File "benchmarks/dynamo/torchbench.py", line 377, in forward_pass\n return mod(*inputs)\n'}
While executing %and_ : [#users=6] = call_function[target=operator.and_](args = (%unsqueeze_1, %to), kwargs = {})
Original traceback:
Module stack: {'mod': <class 'torchbenchmark.models.attention_is_all_you_need_pytorch.transformer.Models.Transformer'>}
File "/workspace/benchmark/torchbenchmark/models/attention_is_all_you_need_pytorch/transformer/Models.py", line 169, in forward
trg_mask = get_pad_mask(trg_seq, self.trg_pad_idx) & get_subsequent_mask(trg_seq)
| File "benchmarks/dynamo/torchbench.py", line 377, in forward_pass
return mod(*inputs)
Set torch._dynamo.config.verbose=True for more information
You can suppress this exception and fall back to eager by setting:
torch._dynamo.config.suppress_errors = True
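For reference, the mismatch in the log can be reduced to a small standalone sketch (an illustration of the failure mode under stated assumptions, not the torchbench code itself): an FX graph that carries a real tensor constant is shape-propagated under a FakeTensorMode, so aten.bitwise_and sees one fake operand and one real operand, as in the traceback. The module name, mask shapes, and tensors below are made up for illustration; the behavior is as observed on the nightly commits listed above and may differ on newer builds.

```python
import torch
from torch.fx import symbolic_trace
from torch.fx.passes.shape_prop import ShapeProp
from torch._subclasses.fake_tensor import FakeTensorMode

class MaskAnd(torch.nn.Module):
    def forward(self, pad_mask):
        # Trace-time constant: FX bakes this real tensor into the graph as an
        # attribute, analogous to the subsequent (lower-triangular) mask above.
        sub_mask = torch.tril(torch.ones(1, 21, 21, dtype=torch.bool))
        return pad_mask & sub_mask

gm = symbolic_trace(MaskAnd())

fake_mode = FakeTensorMode()
with fake_mode:
    # Fake example input, mirroring what dynamo hands to compile_fx.
    fake_input = torch.ones(32, 1, 21, dtype=torch.bool)

# On the nightly above this raises the same "non-Fake Tensor inputs in
# FakeTensorMode" exception: the graph constant stays a real tensor while the
# input is fake, and aten.bitwise_and rejects the mixed operands.
ShapeProp(gm, fake_mode=fake_mode).propagate(fake_input)
```

Note that setting torch._dynamo.config.suppress_errors = True, as suggested at the end of the log, only makes the failing frame fall back to eager rather than fixing the underlying compile error.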