Issues: NVIDIA-AI-IOT/torch2trt
#938: ROOT CAUSE? InstanceNorm3D with TensorRT result consistency with nn.Instance_norm3d (opened Jun 21, 2024 by Yuxiang1990)
#931: Unable to convert SigLIP text transformer due to missing model input when exporting model to ONNX (opened Jun 8, 2024 by aliencaocao)
#926: Questions about the parameters needed for forward propagation during model conversion (opened May 11, 2024 by huangshilong911)
#919: Package is not PEP 517 compliant, making it incompatible with package management tools (opened Mar 26, 2024 by elisa-aleman)
#917: [Question & Feature Request] Is it possible to convert a torch.nn.Module to a TRTModule? (opened Feb 23, 2024 by ElinLiu0)
#915: Python model successfully tested with Polygraphy TensorRT, but fails when loading with torch2trt (opened Jan 4, 2024 by ninono12345)
#913: 'tensorrt.tensorrt.Builder' object has no attribute 'build_cuda_engine' (opened Dec 31, 2023 by maxmelichov)
#909: Inconsistent inference results between PyTorch and converted TensorRT model with MaxPool2d operator (opened Dec 8, 2023 by hongliyu0716)
#908: Inconsistent inference results between PyTorch and converted TensorRT model with Selu operator (opened Dec 8, 2023 by hongliyu0716)