2021-07-23 11:37:56.623 | INFO | __main__:main:52 - loaded checkpoint done.
/home/chen/.virtualenvs/py36/lib/python3.6/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /pytorch/c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
[TensorRT] WARNING: Tensor DataType is determined at build time for tensors not marked as input or output.
[TensorRT] ERROR: ../rtSafe/safeRuntime.cpp (25) - Cuda Error in allocate: 2 (out of memory)
[TensorRT] ERROR: ../rtSafe/safeRuntime.cpp (25) - Cuda Error in allocate: 2 (out of memory)
2021-07-23 11:38:01.598 | ERROR | __main__:<module>:77 - An error has been caught in function '<module>', process 'MainProcess' (17632), thread 'MainThread' (139750195443456):
Traceback (most recent call last):
> File "tools/trt.py", line 77, in <module>
main()
└ <function main at 0x7f18545ff488>
File "tools/trt.py", line 64, in main
torch.save(model_trt.state_dict(), os.path.join(file_name, 'model_trt.pth'))
│ │ │ │ │ │ │ └ './YOLOX_outputs/yolox_s'
│ │ │ │ │ │ └ <function join at 0x7f1a20b3d378>
│ │ │ │ │ └ <module 'posixpath' from '/usr/lib/python3.6/posixpath.py'>
│ │ │ │ └ <module 'os' from '/usr/lib/python3.6/os.py'>
│ │ │ └ <function Module.state_dict at 0x7f1864626620>
│ │ └ TRTModule()
│ └ <function save at 0x7f18e8d24bf8>
└ <module 'torch' from '/home/chen/.virtualenvs/py36/lib/python3.6/site-packages/torch/__init__.py'>
File "/home/chen/.virtualenvs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1261, in state_dict
hook_result = hook(self, destination, prefix, local_metadata)
│ │ │ │ └ {'version': 1}
│ │ │ └ ''
│ │ └ OrderedDict()
│ └ TRTModule()
└ <function TRTModule._on_state_dict at 0x7f1863b1df28>
File "/home/chen/.virtualenvs/py36/lib/python3.6/site-packages/torch2trt-0.3.0-py3.6-linux-x86_64.egg/torch2trt/torch2trt.py", line 436, in _on_state_dict
state_dict[prefix + "engine"] = bytearray(self.engine.serialize())
│ │ │ └ None
│ │ └ TRTModule()
│ └ ''
└ OrderedDict()
AttributeError: 'NoneType' object has no attribute 'serialize'
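The final AttributeError is a downstream symptom rather than the root cause: the earlier CUDA out-of-memory errors mean the TensorRT engine was never built, so `self.engine` is `None` when torch2trt's `_on_state_dict` hook calls `serialize()`. A minimal sketch of a guard one could add before the `torch.save(...)` call in tools/trt.py (the helper name is hypothetical; `engine` is the attribute shown in the traceback):

```python
def check_engine_built(model_trt):
    """Fail with a clear message when the TensorRT build did not succeed.

    torch2trt leaves model_trt.engine as None if engine construction
    fails (here: CUDA out-of-memory during the build), and calling
    state_dict() afterwards crashes with
    "'NoneType' object has no attribute 'serialize'".
    """
    if getattr(model_trt, "engine", None) is None:
        raise RuntimeError(
            "TensorRT engine build failed (engine is None); "
            "free GPU memory or reduce the builder workspace and retry"
        )
    return model_trt
```

With a guard like this, the failure surfaces as an explicit RuntimeError at the point of the failed build instead of the opaque AttributeError above.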
@ruinmessi I am getting the following error:
File "/usr/local/lib/python3.8/dist-packages/torch2trt-0.3.0-py3.8-linux-x86_64.egg/torch2trt/torch2trt.py", line 291, in wrapper
outputs = method(*args, **kwargs)
│ │ └ {}
│ └ (tensor([[[[ 8.0270e-02, -9.2515e-02, -9.2515e-02, ..., -9.2515e-02,
│ -9.2515e-02, -1.7056e-01],
│ [ 1.298...
└ <built-in method batch_norm of type object at 0x7fc51a261ce0>
RuntimeError: CUDA out of memory. Tried to allocate 14.00 MiB (GPU 0; 1.96 GiB total capacity; 426.84 MiB already allocated; 4.88 MiB free; 464.00 MiB reserved in total by PyTorch)
My configuration is:
Ubuntu 18.04
PyTorch 1.8.0
CUDA 11.1
cuDNN 8.0.4
TensorRT 7.2.3
torch2trt 0.3.0
Hope to get your help, thanks!
Hello, when I used tools/trt.py to try to convert a pth.tar file into a standard TensorRT file, some errors occurred, as shown above.
My configuration is:
Ubuntu 16.04
PyTorch 1.9.0
CUDA 11.1
cuDNN 8.0.4
TensorRT 7.2.1.6
torch2trt 0.3.0
Hope to get your help, thanks!