❓ Question
I'm developing a C++ inference server to deploy Torch-TensorRT models and TorchScript models. Since the Torch-TensorRT compilation process is done AOT, is there a way to tell whether a given .pt model file is a Torch-TensorRT compiled model or a pure TorchScript model?
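
For context, here is a minimal sketch of the kind of check I have in mind on the C++ side. It assumes the module is loaded with `torch::jit::load` and that Torch-TensorRT compiled modules call into `tensorrt::execute_engine` nodes and `__torch__.torch.classes.tensorrt` custom classes in their graph; the helper name `looks_like_torch_tensorrt` and the string heuristic are just my own illustration, not an official API:

```cpp
#include <torch/script.h>

#include <iostream>
#include <string>

// Heuristic sketch: inspect the forward() graph of a loaded TorchScript module
// and look for TensorRT-specific nodes/classes that (to my understanding)
// Torch-TensorRT embeds at compile time. A plain TorchScript module should not
// contain them. This is an assumption, not a documented API.
bool looks_like_torch_tensorrt(const torch::jit::Module& module) {
  const auto method = module.get_method("forward");
  const std::string graph_str = method.graph()->toString();
  return graph_str.find("tensorrt::execute_engine") != std::string::npos ||
         graph_str.find("__torch__.torch.classes.tensorrt") != std::string::npos;
}

int main(int argc, char** argv) {
  if (argc < 2) {
    std::cerr << "usage: " << argv[0] << " <model.pt>\n";
    return 1;
  }
  torch::jit::Module module = torch::jit::load(argv[1]);
  std::cout << (looks_like_torch_tensorrt(module)
                    ? "Torch-TensorRT compiled module"
                    : "plain TorchScript module")
            << std::endl;
  return 0;
}
```

One caveat I'm aware of: as far as I understand, loading a Torch-TensorRT compiled .pt at all requires the Torch-TensorRT runtime library to be linked (or loaded) so its custom classes can be deserialized, so this check can only run after a successful load. Is there a more robust or officially supported way to distinguish the two?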
Thanks!