
❓ [Question] Is there support for optional arguments in model's forward()? #772

Closed
lhai37 opened this issue Dec 14, 2021 · 6 comments
Labels: component: core (Issues re: The core compiler), No Activity, question (Further information is requested)

@lhai37

lhai37 commented Dec 14, 2021

❓ Question

Is there support for optional arguments in model's forward()? For example, I have the following: def forward(self, x, y: Optional[Tensor] = None): where y is an optional tensor. The return result is x + y if y is provided, otherwise just x.

What you have already tried

I added a second torch_tensorrt.Input() in the input spec, then at inference time got the error:
Expected dimension specifications for all input tensors, but found 1 input tensors and 2 dimension specs

I then removed the Optional annotation and passed in either None or an actual tensor for y. When None was passed in, I got the error: RuntimeError: forward() Expected a value of type 'Tensor' for argument 'input_1' but instead found type 'NoneType'.

I also tried passing in just 1 argument for x, and got:
RuntimeError: forward() is missing value for argument 'input_1'

Environment

Build information about Torch-TensorRT can be found by turning on debug messages

  • PyTorch Version (e.g., 1.0): 1.10.0+cu113
  • CPU Architecture:
  • OS (e.g., Linux): Ubuntu 18.04
  • How you installed PyTorch (conda, pip, libtorch, source): pip
  • Build command you used (if compiling from source):
  • Are you using local sources or building from archives:
  • Python version: 3.7.11
  • CUDA version: 11.1
  • GPU models and configuration: Tesla V100 with 32GB memory
  • Any other relevant information:

Additional context

@lhai37 lhai37 added the question Further information is requested label Dec 14, 2021
@peri044
Collaborator

peri044 commented Dec 19, 2021

@lhai37 I don't think we support optional tensors at the moment. cc @narendasan. We expect the inputs and outputs of a module to be torch::Tensors. Can you share what your TorchScript model looks like? I tried to convert the following to TorchScript, but torch.jit.script fails on it:

from typing import Optional

import torch

class OptionalAdd(torch.nn.Module):  # renamed so the class does not shadow typing.Optional
    def forward(self, x: torch.Tensor, y: Optional[torch.Tensor] = None):
        return x + y  # scripting fails here: Tensor + Optional[Tensor] without a None check

model = OptionalAdd()
scripted_model = torch.jit.script(model)  # raises a RuntimeError

@ProGamerGov

@peri044 Your code fails because you are doing x + None. If you add an if statement to prevent that, then it will work in TorchScript.
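Concretely, a minimal sketch of that fix (class and variable names here are illustrative, not from the thread): an explicit None check lets TorchScript narrow Optional[Tensor] to Tensor, so the module scripts successfully and returns x + y when y is given, otherwise x.

```python
from typing import Optional

import torch


class OptionalAdd(torch.nn.Module):
    def forward(self, x: torch.Tensor, y: Optional[torch.Tensor] = None) -> torch.Tensor:
        # The None check narrows y from Optional[Tensor] to Tensor,
        # so TorchScript accepts the addition.
        if y is not None:
            return x + y
        return x


scripted = torch.jit.script(OptionalAdd())  # scripts successfully
```

Note that this only makes the module *scriptable*; as discussed below, Torch-TensorRT compilation of an Optional input is a separate question.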

@github-actions

This issue has not seen activity for 90 days. Remove the stale label or comment, or this will be closed in 10 days.

@narendasan narendasan added the component: core Issues re: The core compiler label May 18, 2022
@narendasan narendasan self-assigned this May 18, 2022
@narendasan
Collaborator

This is not currently supported; it requires the next phase of the collections feature (#629). The issue is that we need to be able to generate the TorchScript code that manages the mapping from function inputs to TensorRT inputs when potentially any input could be None.
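Until that lands, one possible workaround (a sketch, not suggested in this thread) is to make every argument a required Tensor and have callers substitute a neutral value such as a zero tensor for the omitted input. The compiler then sees a fixed two-Tensor signature, matching two torch_tensorrt.Input specs:

```python
import torch


class AddRequired(torch.nn.Module):
    # Both inputs are required Tensors, so the number of input
    # specs matches the number of input tensors at compile time.
    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        return x + y


model = torch.jit.script(AddRequired())

x = torch.ones(3)
with_y = model(x, torch.full((3,), 2.0))    # the "y provided" case
without_y = model(x, torch.zeros_like(x))   # the "y omitted" case: pass zeros, since x + 0 == x
```

This only works when a neutral element exists for the way the optional input is used (here, zeros for addition); it trades a little extra compute for a signature the compiler can handle.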

