Closed
Labels: bug (Something isn't working)
Description
Bug Description
Returning a tuple of torch tensors raises the following error:
RuntimeError: Method (but not graphs in general) require a single output. Use None/Tuple for 0 or 2+ outputs
This occurs even though the model already returns a tuple of torch tensors.
To Reproduce
import torch
from torch import nn
from torch.nn import functional as F

import torch_tensorrt as torchtrt
import torch_tensorrt.logging as logging

logging.set_reportable_log_level(logging.Level.Warning)

torch.manual_seed(0)

DEVICE = torch.device("cuda:0")
INPUT_SIZE = 1
OUTPUT_SIZE = 1
SHAPE = (INPUT_SIZE, INPUT_SIZE)


class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(INPUT_SIZE, OUTPUT_SIZE)

    def forward(self, x):
        o1 = F.softplus(x)
        o2 = self.linear(x)
        return (o1, o2)


if __name__ == "__main__":
    x = torch.randn(SHAPE, dtype=torch.float32, device=DEVICE)
    model = Model().eval().to(DEVICE)

    (o1, o2) = model(x)
    print(f"Model: {o1.shape} {o2.shape}")

    model_trt = torchtrt.compile(
        model,
        inputs=[
            torchtrt.Input(shape=SHAPE),
        ],
        enabled_precisions={torch.float},
    )

    (o1, o2) = model_trt(x)
    print(f"Model TRT: {o1.shape} {o2.shape}")
Running this outputs the following:
root@65641c126568:/workspace# python /scripts/softplus.py
Model: torch.Size([1, 1]) torch.Size([1, 1])
WARNING: [Torch-TensorRT] - Cannot infer input type from calcuations in graph for input x.1. Assuming it is Float32. If not, specify input type explicity
ERROR: [Torch-TensorRT] - Unsupported operator: aten::softplus(Tensor self, Scalar beta=1, Scalar threshold=20) -> (Tensor)
File "/scripts/softplus.py", line 25
def forward(self, x):
o1 = F.softplus(x)
~~~~~~~~~~ <--- HERE
o2 = self.linear(x)
ERROR: [Torch-TensorRT] - Method requested cannot be compiled by Torch-TensorRT.TorchScript.
Unsupported operators listed below:
- aten::softplus(Tensor self, Scalar beta=1, Scalar threshold=20) -> (Tensor)
You can either implement converters for these ops in your application or request implementation
https://www.github.com/nvidia/Torch-TensorRT/issues
In Module:
ERROR: [Torch-TensorRT] - Unsupported operator: aten::softplus(Tensor self, Scalar beta=1, Scalar threshold=20) -> (Tensor)
File "/scripts/softplus.py", line 25
def forward(self, x):
o1 = F.softplus(x)
~~~~~~~~~~ <--- HERE
o2 = self.linear(x)
WARNING: [Torch-TensorRT] - Input type for doing shape analysis could not be determined, defaulting to F32
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Detected invalid timing cache, setup a local cache instead
Traceback (most recent call last):
File "/scripts/softplus.py", line 45, in <module>
(o1, o2) = model_trt(x)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
RuntimeError: Method (but not graphs in general) require a single output. Use None/Tuple for 0 or 2+ outputs
Expected behavior
The script should not error, and should print the shapes of the returned tensors.
Environment
Ubuntu 18.04 x86-64
- v1.0 using TRT NGC 21.11-py3: Issue appears in this environment
- Master (11bcb98) using TRT NGC 22.02-py3: Issue does not appear in this environment
Additional context
The issue appears to be related to the F.softplus(...) call specifically, not to returning tuples in general: swapping this call out for another supported operation and returning the resulting tuple of tensors runs without issue. E.g., the following implementation works:
def forward(self, x):
    o1 = self.linear(x)
    o2 = self.linear(x)
    return (o1, o2)
Thus, the misleading "single output" error appears to be masking a deeper issue elsewhere in the compilation process, triggered by the unsupported aten::softplus converter.
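A possible interim workaround (untested against the v1.0 container, and assuming converters exist for the primitive ops involved) would be to compute softplus from supported primitives instead of calling F.softplus. For reference, aten::softplus computes (1/beta) * log(1 + exp(beta * x)), reverting to the identity when beta * x exceeds the threshold to avoid overflow. A minimal pure-Python sketch of that definition:

```python
import math

def softplus(x, beta=1.0, threshold=20.0):
    # Matches the documented aten::softplus semantics:
    # softplus(x) = (1/beta) * log(1 + exp(beta * x)),
    # with a linear passthrough when beta * x > threshold
    # (the same overflow guard PyTorch documents).
    bx = beta * x
    if bx > threshold:
        return x
    return math.log1p(math.exp(bx)) / beta

# softplus(0) = log(2)
assert abs(softplus(0.0) - math.log(2.0)) < 1e-12
# Large inputs pass through linearly.
assert softplus(100.0) == 100.0
```

In the model, the equivalent tensor expression would be torch.log1p(torch.exp(x)) for the default beta=1 case; whether that path compiles cleanly under this Torch-TensorRT release is an assumption, not something verified here.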