Exception in torch.jit.script doesn't indicate where in the code the problem lies. #78486

Open
BrettRyland opened this issue May 30, 2022 · 2 comments
Labels
oncall: jit

Comments

@BrettRyland

🐛 Describe the bug

The following exception, caused by

RuntimeError: Attempted to use Dict without contained types. Please add contained type, e.g. Dict[int, int]

gives no indication of where in the model/code the problem lies, only that there is a Dict without contained types somewhere:

In [6]: scripted_model = run_trace(S,args)
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Input In [6], in <cell line: 1>()
----> 1 scripted_model = run_trace(S,args)

File ~/repos/Autosensor/NN/deploy/trace_model.py:35, in run_trace(S, args)
     33 with torch.inference_mode():
     34         if args.script:
---> 35                 traced_module = torch.jit.script(S.model)
     36         else:
     37                 example = torch.rand(1, 3, *S.model.input_size).to(S.device)

File ~/.local/lib/python3.10/site-packages/torch/jit/_script.py:1265, in script(obj, optimize, _frames_up, _rcb, example_inputs)
   1263 if isinstance(obj, torch.nn.Module):
   1264     obj = call_prepare_scriptable_func(obj)
-> 1265     return torch.jit._recursive.create_script_module(
   1266         obj, torch.jit._recursive.infer_methods_to_compile
   1267     )
   1269 if isinstance(obj, dict):
   1270     return create_script_dict(obj)

File ~/.local/lib/python3.10/site-packages/torch/jit/_recursive.py:454, in create_script_module(nn_module, stubs_fn, share_types, is_tracing)
    452 if not is_tracing:
    453     AttributeTypeIsSupportedChecker().check(nn_module)
--> 454 return create_script_module_impl(nn_module, concrete_type, stubs_fn)

File ~/.local/lib/python3.10/site-packages/torch/jit/_recursive.py:520, in create_script_module_impl(nn_module, concrete_type, stubs_fn)
    518 # Compile methods if necessary
    519 if concrete_type not in concrete_type_store.methods_compiled:
--> 520     create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs)
    521     # Create hooks after methods to ensure no name collisions between hooks and methods.
    522     # If done before, hooks can overshadow methods that aren't exported.
    523     create_hooks_from_stubs(concrete_type, hook_stubs, pre_hook_stubs)

File ~/.local/lib/python3.10/site-packages/torch/jit/_recursive.py:371, in create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs)
    368 property_defs = [p.def_ for p in property_stubs]
    369 property_rcbs = [p.resolution_callback for p in property_stubs]
--> 371 concrete_type._create_methods_and_properties(property_defs, property_rcbs, method_defs, method_rcbs, method_defaults)

File ~/.local/lib/python3.10/site-packages/torch/jit/annotations.py:350, in try_ann_to_type(ann, loc)
    348 if a is None:
    349     inner.append(NoneType.get())
--> 350 maybe_type = try_ann_to_type(a, loc)
    351 msg = "Unsupported annotation {} could not be resolved because {} could not be resolved."
    352 assert maybe_type, msg.format(repr(ann), repr(maybe_type))

File ~/.local/lib/python3.10/site-packages/torch/jit/annotations.py:321, in try_ann_to_type(ann, loc)
    319     if elem_type:
    320         return ListType(elem_type)
--> 321 if is_dict(ann):
    322     key = try_ann_to_type(ann.__args__[0], loc)
    323     value = try_ann_to_type(ann.__args__[1], loc)

File ~/.local/lib/python3.10/site-packages/torch/_jit_internal.py:880, in is_dict(ann)
    878 def is_dict(ann) -> bool:
    879     if ann is Dict:
--> 880         raise_error_container_parameter_missing("Dict")
    882     if not hasattr(ann, '__module__'):
    883         return False

File ~/.local/lib/python3.10/site-packages/torch/_jit_internal.py:1101, in raise_error_container_parameter_missing(target_type)
   1099 def raise_error_container_parameter_missing(target_type) -> None:
   1100     if target_type == 'Dict':
-> 1101         raise RuntimeError(
   1102             "Attempted to use Dict without "
   1103             "contained types. Please add contained type, e.g. "
   1104             "Dict[int, int]"
   1105         )
   1106     raise RuntimeError(
   1107         f"Attempted to use {target_type} without a "
   1108         "contained type. Please add a contained type, e.g. "
   1109         f"{target_type}[int]"
   1110     )

RuntimeError: Attempted to use Dict without contained types. Please add contained type, e.g. Dict[int, int]

It would be really helpful if the exception could indicate the location of the problematic dictionary.
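
For reference, a minimal sketch that reproduces the same behaviour (a hypothetical example, not the actual model from the traceback above): a bare Dict annotation in a scripted method's signature is enough to trigger the error, and the resulting traceback points into torch/jit/annotations.py rather than at the offending method.

# Hypothetical minimal repro (not the original model): scripting a module whose
# method signature carries a bare `Dict` annotation raises the same RuntimeError.
from typing import Dict

import torch


class BadModule(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> Dict:  # bare Dict, no contained types
        return {"out": x}


torch.jit.script(BadModule())
# RuntimeError: Attempted to use Dict without contained types.
# Please add contained type, e.g. Dict[int, int]

Annotating the return type as Dict[str, torch.Tensor] instead makes the error go away; the point of this issue is that the exception itself never says which annotation needs fixing.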

Versions

Collecting environment information...
PyTorch version: 1.11.0a0+gitbc2c6ed
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35

Python version: 3.10.4 (main, Apr  2 2022, 09:04:19) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-33-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.64
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1080 Ti
Nvidia driver version: 515.43.04
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.3.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.6
[pip3] numpy-quaternion==2022.4.1
[pip3] torch==1.11.0
[pip3] torch-tensorrt==1.2.0a0+e9e824c0
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.12.0
[conda] Could not collect
@facebook-github-bot added the oncall: jit label May 30, 2022
@BrettRyland
Author

Note: I eventually found the Dict, but wasted half a day in doing so.
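
(A possible way to find such annotations up front, offered here as an assumption rather than anything from the original thread: scan the model's methods for bare typing containers before calling torch.jit.script, so the offending class and method get named. This only covers annotations reachable via __annotations__; string annotations and attribute-level Dicts would be missed.)

# Hypothetical helper (not from this thread): report bare Dict/List/Tuple
# annotations on the methods of a module and its submodules.
import inspect
from typing import Dict, List, Tuple

import torch

BARE_CONTAINERS = (Dict, List, Tuple)


def find_bare_container_annotations(model: torch.nn.Module):
    hits = []
    for sub in model.modules():
        # Inspect every function defined on the submodule's class.
        for name, fn in inspect.getmembers(type(sub), inspect.isfunction):
            for arg, ann in getattr(fn, "__annotations__", {}).items():
                if any(ann is bare for bare in BARE_CONTAINERS):
                    hits.append(f"{type(sub).__name__}.{name}: '{arg}' is a bare {ann}")
    return hits

For a failure like the one in the traceback above, which occurs while compiling method stubs, running this over S.model before the torch.jit.script call should name the class and method carrying the unparameterized Dict.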

@Gamrix
Contributor

Gamrix commented Jun 2, 2022

Can you include a sample repro of this error?
