Status: Closed
Labels: bug
Bug Description
Compilation succeeds when I fall back a model with multiple outputs (via `torch_executed_ops`). However, running inference with the optimized module raises the following error:
```
Traceback (most recent call last):
  File "test_insert.py", line 35, in <module>
    r(torch.ones([1,12,224,224]).cuda())
  File "/usr/local/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
RuntimeError: Method (but not graphs in general) require a single output. Use None/Tuple for 0 or 2+ outputs
```
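For context, the error comes from TorchScript's requirement that a Method expose exactly one output value; multiple Python return values are normally packed into a single Tuple output. This minimal sketch (independent of Torch-TensorRT) illustrates the behavior the error message refers to:

```python
import torch

# TorchScript methods must have a single output value. Multiple Python
# return values are packed into one Tuple output, which is what the
# "Use None/Tuple for 0 or 2+ outputs" hint means. The bug here appears
# to be that the fallback graph construction registers 2+ raw outputs
# instead of packing them into a Tuple.
@torch.jit.script
def multi_out(x: torch.Tensor):
    return x + 1, x - 1

# On the caller side this arrives as an ordinary Python tuple.
a, b = multi_out(torch.zeros(3))
```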
To Reproduce
Just run:
```python
import torch
import torch.nn as nn
from pylib_torch import torch_tensorrt as tt

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = nn.Conv2d(3, 3, 3)
        self.conv2 = nn.Conv2d(3, 3, 3)
        self.conv3 = nn.Conv2d(3, 3, 3)
        self.conv4 = nn.Conv2d(3, 3, 3)

    def forward(self, x):
        x1 = x[:, 0:3]
        x2 = x[:, 3:6]
        x3 = x[:, 6:9]
        x4 = x[:, 9:12]
        out1 = self.conv1(x1)
        out2 = self.conv2(x2)
        out3 = self.conv3(x3)
        out4 = self.conv4(x4)
        return out1, out2, out3, out4

a = Model().cuda().eval()
b = torch.jit.trace(a, torch.ones([1, 12, 20, 20]).cuda())
torch.jit.save(b, 'model.ts')

compile_settings = {}
compile_settings["inputs"] = [tt.Input(shape=[1, 12, 20, 20])]
compile_settings["torch_executed_ops"] = ['aten::slice']
r = tt.compile(b, **compile_settings)
r(torch.ones([1, 12, 224, 224]).cuda())
```
Expected behavior
The model compiles and infers correctly.
Environment
Build information about Torch-TensorRT can be found by turning on debug messages
- Torch-TensorRT Version: 1.0.0
- PyTorch Version: 1.10.0
- CPU Architecture: Intel(R) Xeon(R) Platinum 8352Y CPU @ 2.20GHz
- OS (e.g., Linux): CentOS 7
- How you installed PyTorch (conda, pip, libtorch, source): pip
- Build command you used (if compiling from source):
- Are you using local sources or building from archives:
- Python version: 3.6.8
- CUDA version: 11.4
- GPU models and configuration: A30
- Any other relevant information:
Additional context
I have traced this bug to torch_tensorrt::core::ConstructFallbackGraph and fixed it locally. Once the bug is confirmed, I will open a PR with the fix.