# 🐛 [Bug] SpecViolationError: Node.meta _to_copy_default is missing val field when using unsqueeze #2799

Labels: bug
Notably, the code below works:

```python
import torch
import torch_tensorrt


class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, topk_ind):
        gather_index = topk_ind.unsqueeze(-1)
        return gather_index


def main():
    model = Model().cuda()
    model.eval()
    topk_ind = torch.randint(8400, size=(1, 300)).cuda()
    inputs = [
        torch_tensorrt.Input(topk_ind.shape, dtype=torch.int32),
    ]
    enabled_precisions = {torch.half, torch.float32}
    trt_model = torch_tensorrt.compile(
        model,
        inputs=inputs,
        enabled_precisions=enabled_precisions,
        truncate_long_and_double=True,
        min_block_size=1,
    )


if __name__ == "__main__":
    main()
```

which suggests the `unsqueeze` itself is not the problem. This version, however, fails:

```python
import torch
import torch_tensorrt


class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.num_queries = 300

    def forward(self, enc_outputs_class):
        _, topk_ind = torch.topk(
            enc_outputs_class.max(-1).values, self.num_queries, dim=1
        )
        gather_index = topk_ind.unsqueeze(-1)
        return gather_index


def main():
    model = Model().cuda()
    model.eval()
    enc_outputs_class = torch.randn(1, 8400, 80).cuda()
    inputs = [
        torch_tensorrt.Input(enc_outputs_class.shape),
    ]
    enabled_precisions = {torch.half, torch.float32}
    trt_model = torch_tensorrt.compile(
        model,
        inputs=inputs,
        enabled_precisions=enabled_precisions,
        truncate_long_and_double=True,
        min_block_size=1,
    )


if __name__ == "__main__":
    main()
```

so it seems to have something to do with the `torch.topk` call.
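The only difference between the two snippets is the `torch.topk` call. One relevant detail (an observation added here, not from the thread) is that `torch.topk` always returns `int64` indices, which is exactly the "long" dtype that `truncate_long_and_double` must down-cast for TensorRT; the `unsqueeze` merely appends a trailing dimension. A CPU-only sketch:

```python
import torch

# topk over the per-anchor class maxima, mirroring the failing repro (CPU is fine here)
enc_outputs_class = torch.randn(1, 8400, 80)
_, topk_ind = torch.topk(enc_outputs_class.max(-1).values, 300, dim=1)

# topk indices are always int64, regardless of the input dtype
print(topk_ind.dtype)       # torch.int64
print(topk_ind.shape)       # torch.Size([1, 300])

# unsqueeze(-1) only appends a trailing dimension of size 1
gather_index = topk_ind.unsqueeze(-1)
print(gather_index.shape)   # torch.Size([1, 300, 1])
```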
Adding `output_format="torchscript"`:

```python
import torch
import torch_tensorrt


class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.num_queries = 300

    def forward(self, enc_outputs_class):
        _, topk_ind = torch.topk(
            enc_outputs_class.max(-1).values, self.num_queries, dim=1
        )
        gather_index = topk_ind.unsqueeze(-1)
        return gather_index


def main():
    model = Model().cuda()
    model.eval()
    enc_outputs_class = torch.randn(1, 8400, 80).cuda()
    inputs = [
        torch_tensorrt.Input(enc_outputs_class.shape),
    ]
    enabled_precisions = {torch.half, torch.float32}
    trt_model = torch_tensorrt.compile(
        model,
        inputs=inputs,
        enabled_precisions=enabled_precisions,
        truncate_long_and_double=True,
        min_block_size=1,
        output_format="torchscript",
    )


if __name__ == "__main__":
    main()
```
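As a side note, a hypothetical workaround (a sketch added here, not something suggested in the thread) would be to cast the `topk` indices to `int32` explicitly inside the model, so that no `int64` tensor reaches the TensorRT lowering at all. The eager-mode output is unchanged apart from the dtype:

```python
import torch


class ModelInt32(torch.nn.Module):
    """Same model as the repro, but with an explicit int32 cast (assumed workaround)."""

    def __init__(self):
        super().__init__()
        self.num_queries = 300

    def forward(self, enc_outputs_class):
        _, topk_ind = torch.topk(
            enc_outputs_class.max(-1).values, self.num_queries, dim=1
        )
        # explicit down-cast so no int64 tensor is left for truncate_long_and_double
        gather_index = topk_ind.to(torch.int32).unsqueeze(-1)
        return gather_index


# eager-mode check on CPU: shape and dtype come out as expected
out = ModelInt32()(torch.randn(1, 8400, 80))
print(out.shape, out.dtype)  # torch.Size([1, 300, 1]) torch.int32
```

Whether this avoids the `SpecViolationError` on the affected versions is untested here; it only illustrates where the `int64` originates.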
Closing, as this seems to work properly on main (as of June).
Confirmed fixed for me using PyTorch 2.3.1 and Torch-TensorRT 2.3.0. |
## Bug Description

The repro code (the `topk` + `unsqueeze` snippet above) produces the following error:

```
SpecViolationError: Node.meta _to_copy_default is missing val field
```

The same code works fine with Torch-TensorRT 2.1.0-rc9 and PyTorch 2.1.2.

## To Reproduce

## Expected behavior

## Environment

- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): `pip`

## Additional context