
dynamo (eval_frame.py) failed on a Windows unit test #132561

@xuhancn


🐛 Describe the bug

Reproduce:
Set up the environment per #124245 (comment), then run:

pytest -v test/quantization/pt2e/test_x86inductor_quantizer.py -k test_qat_conv2d
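
For reference, here is a minimal standalone sketch of the same prepare_qat_pt2e path the test exercises. The toy ConvBN module and input shapes are illustrative, not taken from the test; API names follow PyTorch main around the 2.5 dev cycle, where the QAT prepare flow still goes through capture_pre_autograd_graph:

```python
# Sketch only: a toy Conv2d+BatchNorm module run through the same QAT prepare
# path as the failing test. Module/shape choices are illustrative assumptions.
import torch
from torch._export import capture_pre_autograd_graph
from torch.ao.quantization.quantize_pt2e import prepare_qat_pt2e
from torch.ao.quantization.quantizer.x86_inductor_quantizer import (
    X86InductorQuantizer,
    get_default_x86_inductor_quantization_config,
)


class ConvBN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, 3)
        self.bn = torch.nn.BatchNorm2d(8)

    def forward(self, x):
        return self.bn(self.conv(x))


example_inputs = (torch.randn(1, 3, 16, 16),)
m = capture_pre_autograd_graph(ConvBN().train(), example_inputs)

quantizer = X86InductorQuantizer()
quantizer.set_global(get_default_x86_inductor_quantization_config(is_qat=True))

# On Windows this call reaches _fuse_conv_bn_qat -> torch._dynamo.export and
# fails with "Unexpectedly found a <class 'torch.Tensor'> in the outputs."
m = prepare_qat_pt2e(m, quantizer)
```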

Error message:

Traceback (most recent call last):
  File "C:\Users\Xuhan\.conda\envs\win_mkl_static\lib\unittest\case.py", line 59, in testPartExecutor
    yield
  File "C:\Users\Xuhan\.conda\envs\win_mkl_static\lib\unittest\case.py", line 591, in run
    self._callTestMethod(testMethod)
  File "C:\Users\Xuhan\.conda\envs\win_mkl_static\lib\unittest\case.py", line 549, in _callTestMethod
    method()
  File "C:\Users\Xuhan\.conda\envs\win_mkl_static\lib\site-packages\torch\testing\_internal\common_utils.py", line 2918, in wrapper
    method(*args, **kwargs)
  File "C:\Users\Xuhan\.conda\envs\win_mkl_static\lib\site-packages\torch\testing\_internal\common_utils.py", line 1515, in wrapper
    fn(*args, **kwargs)
  File "C:\Users\Xuhan\.conda\envs\win_mkl_static\lib\site-packages\torch\testing\_internal\common_quantization.py", line 399, in wrapper
    fn(*args, **kwargs)
  File "D:\xu_git\dnnl_cb\pytorch\test\quantization\pt2e\test_x86inductor_quantizer.py", line 1737, in test_qat_conv2d
    self._test_quantizer(
  File "D:\xu_git\dnnl_cb\pytorch\test\quantization\pt2e\test_x86inductor_quantizer.py", line 560, in _test_quantizer
    m = prepare_qat_pt2e(m, quantizer) if is_qat else prepare_pt2e(m, quantizer)
  File "C:\Users\Xuhan\.conda\envs\win_mkl_static\lib\site-packages\torch\ao\quantization\quantize_pt2e.py", line 174, in prepare_qat_pt2e
    _fuse_conv_bn_qat(model)
  File "C:\Users\Xuhan\.conda\envs\win_mkl_static\lib\site-packages\torch\ao\quantization\pt2e\qat_utils.py", line 625, in _fuse_conv_bn_qat
    m = _fuse_conv_bn_qat_helper(
  File "C:\Users\Xuhan\.conda\envs\win_mkl_static\lib\site-packages\torch\ao\quantization\pt2e\qat_utils.py", line 657, in _fuse_conv_bn_qat_helper
    match_pattern = _get_aten_graph_module_for_pattern(
  File "C:\Users\Xuhan\.conda\envs\win_mkl_static\lib\site-packages\torch\ao\quantization\pt2e\utils.py", line 355, in _get_aten_graph_module_for_pattern
    aten_pattern = capture_pre_autograd_graph(
  File "C:\Users\Xuhan\.conda\envs\win_mkl_static\lib\site-packages\torch\_export\__init__.py", line 143, in capture_pre_autograd_graph
    m = torch._dynamo.export(
  File "C:\Users\Xuhan\.conda\envs\win_mkl_static\lib\site-packages\torch\_dynamo\eval_frame.py", line 1551, in inner
    graph = rewrite_signature(
  File "C:\Users\Xuhan\.conda\envs\win_mkl_static\lib\site-packages\torch\_dynamo\eval_frame.py", line 1098, in rewrite_signature
    matched_output_elements_positions = produce_matching(
  File "C:\Users\Xuhan\.conda\envs\win_mkl_static\lib\site-packages\torch\_dynamo\eval_frame.py", line 1084, in produce_matching
    raise AssertionError(
AssertionError: Unexpectedly found a <class 'torch.Tensor'> in the outputs.
Please file an issue along with a paste of the logs from TORCH_LOGS="+export"

To execute this test, run the following from the base repo dir:
    python test\quantization\pt2e\test_x86inductor_quantizer.py -k TestQuantizePT2EX86Inductor.test_qat_conv2d

This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
---------------------------------------- Captured stderr call ----------------------------------------
W0803 08:01:22.074000 13136 torch\_export\__init__.py:64] +============================+
W0803 08:01:22.074000 13136 torch\_export\__init__.py:65] |     !!!   WARNING   !!!    |
W0803 08:01:22.075000 13136 torch\_export\__init__.py:66] +============================+
W0803 08:01:22.075000 13136 torch\_export\__init__.py:67] capture_pre_autograd_graph() is deprecated and doesn't provide any function guarantee moving forward.
W0803 08:01:22.075000 13136 torch\_export\__init__.py:68] Please switch to use torch.export instead.
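
The deprecation warning above recommends torch.export as the replacement for capture_pre_autograd_graph. A minimal sketch of that suggested API on a toy module is below; it is only illustrative and does not claim to make the failing QAT prepare path work on Windows:

```python
# Sketch only: the torch.export API the warning points to, applied to an
# illustrative toy module (not from the original report).
import torch


class ConvBN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, 3)
        self.bn = torch.nn.BatchNorm2d(8)

    def forward(self, x):
        return self.bn(self.conv(x))


example_inputs = (torch.randn(1, 3, 16, 16),)
exported = torch.export.export(ConvBN().eval(), example_inputs)
graph_module = exported.module()  # GraphModule usable for further transforms
```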

Versions

The latest main branch code reproduces the issue.
I'm working on enabling torch.compile on Windows and ran into this issue, but I'm not a Dynamo expert, so I'm opening this issue to ask the community for help.

Collecting environment information...
PyTorch version: 2.5.0a0+giteb5883f
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A

OS: Microsoft Windows 11 Pro
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: N/A

Python version: 3.10.14 | packaged by Anaconda, Inc. | (main, May  6 2024, 19:44:50) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22631-SP0
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060 Ti
Nvidia driver version: 472.12
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture=9
CurrentClockSpeed=3303
DeviceID=CPU0
Family=207
L2CacheSize=10240
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=3303
Name=11th Gen Intel(R) Core(TM) i9-11900KB @ 3.30GHz
ProcessorType=3
Revision=

Versions of relevant libraries:
[pip3] numpy==2.0.0
[pip3] torch==2.5.0a0+git0738916
[pip3] torchvision==0.19.0a0+06ad737
[conda] mkl-include               2024.2.0                 pypi_0    pypi
[conda] mkl-static                2024.2.0                 pypi_0    pypi
[conda] numpy                     2.0.0                    pypi_0    pypi
[conda] torch                     2.5.0a0+gitff2008f          pypi_0    pypi
[conda] torchvision               0.19.0a0+06ad737          pypi_0    pypi

cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @rec @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4

Metadata

Assignees

Labels

export-triaged, module: dynamo, module: inductor, module: windows, oncall: export, oncall: pt2, oncall: quantization, triaged

Status

Done