
❓ [Question] Runtime check of the inference platform FP16 support #1139

@gcuendet


❓ Question

Let's assume one converts a TorchScript module to a Torch-TensorRT TorchScript module, requesting FP16 as the inference type. At conversion time, if the GPU doesn't support FP16 (a GTX 1060, typically), a nice ::torch_tensorrt::Error is thrown saying:

Requested inference in FP16 but platform does not support FP16

That's all good.
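For context, this is the conversion step in question. A minimal sketch, assuming the Torch-TensorRT 1.0 C++ API; the module path and input shape are illustrative placeholders:

#include <torch/script.h>
#include "torch_tensorrt/torch_tensorrt.h"

int main() {
  auto mod = torch::jit::load("model.ts");  // placeholder file name
  auto spec = torch_tensorrt::ts::CompileSpec({std::vector<int64_t>{1, 3, 224, 224}});
  spec.enabled_precisions = {torch::kHalf};  // request FP16 inference
  // On a GPU without FP16 support, compile() throws torch_tensorrt::Error
  // ("Requested inference in FP16 but platform does not support FP16").
  auto trt_mod = torch_tensorrt::ts::compile(mod, spec);
  trt_mod.save("model_trt_fp16.ts");
  return 0;
}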

Now, it seems that if one tries to run an already converted Torch-TensorRT TorchScript module with inference type FP16 on a GPU that doesn't support FP16, there is no such check and the program crashes with:

warning: Critical error detected c0000374

Thread 1 received signal SIGTRAP, Trace/breakpoint trap.
0x00007ffac1baf1d3 in ntdll!RtlIsZeroMemory () from C:\WINDOWS\SYSTEM32\ntdll.dll
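For reference, the runtime-only scenario is simply loading and running the previously compiled module, with no conversion step on the deployment machine. A sketch, with an illustrative file name, and assuming the Torch-TensorRT runtime library is linked so the TensorRT engine ops resolve:

#include <torch/script.h>

int main() {
  auto trt_mod = torch::jit::load("model_trt_fp16.ts");  // compiled elsewhere with FP16 enabled
  auto input = torch::randn({1, 3, 224, 224}, torch::kCUDA).to(torch::kHalf);
  // On a GPU without FP16 support there is no equivalent check here, and the
  // process dies (as in the trace above) instead of raising a torch_tensorrt::Error.
  auto out = trt_mod.forward({input});
  return 0;
}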

My questions are:

  • Are these observations (still) correct? I am using Torch-TensorRT 1.0, so this might have changed.
  • Is there any plan to check the device capability at runtime as well, given that it should be possible to figure out what the inference type of the compiled module is (I don't know whether that is possible or easy to do)? A possible client-side workaround is sketched below.
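In case it helps the discussion, here is one way a client could guard against this today, outside of Torch-TensorRT: ask TensorRT itself whether the platform has fast FP16 before loading the FP16-compiled module. This is only a sketch and not part of the Torch-TensorRT API; the logger class and function name are assumptions, and the ILogger signature targets TensorRT 8:

#include <iostream>
#include "NvInfer.h"

// Minimal logger required by the TensorRT builder factory.
class StderrLogger : public nvinfer1::ILogger {
  void log(Severity severity, const char* msg) noexcept override {
    if (severity <= Severity::kWARNING) std::cerr << msg << std::endl;
  }
};

// Returns true if TensorRT reports fast FP16 support on the current GPU.
bool platform_supports_fp16() {
  StderrLogger logger;
  nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(logger);
  if (!builder) return false;
  bool ok = builder->platformHasFastFp16();
  delete builder;  // TensorRT >= 8 supports delete; older releases use destroy()
  return ok;
}

The application could then call platform_supports_fp16() at startup and fall back to an FP32-compiled module (or exit with a clear error) instead of crashing inside the engine.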
