
EfficientNET.onnx does not run in TensorRT #167

Closed
Soroorsh opened this issue Apr 15, 2020 · 7 comments

@Soroorsh

Hi,
I've got this error when running the EfficientNet model (converted from PyTorch to ONNX) in TensorRT:

Traceback (most recent call last):
  File "tensorrt_python.py", line 59, in <module>
    context = engine.create_execution_context()
AttributeError: 'NoneType' object has no attribute 'create_execution_context'

Can anybody help me?

TensorRT: 6.1.05
PyTorch: 1.1.0
ONNX: 1.5.0

@MartinBrazdil

There was an error during engine creation, hence the engine is None. You have to provide more information about how your engine is created, because engine creation can fail for various reasons.
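In other words, create_execution_context is only the downstream symptom: the build step already returned None. A minimal sketch of that failure mode (hypothetical stand-in names, no TensorRT required):

```python
def build_engine(parse_ok):
    # Stand-in for TensorRT's engine builder: when the ONNX parse fails,
    # the real builder logs errors and returns None rather than raising.
    return object() if parse_ok else None

engine = build_engine(parse_ok=False)
# engine is None, so the next call reproduces the reported AttributeError:
try:
    engine.create_execution_context()
except AttributeError as err:
    print(err)  # 'NoneType' object has no attribute 'create_execution_context'
```

Checking whether the engine is None right after building (and printing the parser errors at that point) reveals the actual cause.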

@ray-lee-94

ray-lee-94 commented Apr 17, 2020

> There was an error during engine creation, hence the engine is None. You have to provide more information about how your engine is created, because engine creation can fail for various reasons.

See, I convert the b2 model to ONNX and check the results; they look good.
But when I load the ONNX model with TensorRT, it outputs many warnings and takes much longer than converting ResNet50.
[screenshots of the TensorRT warnings omitted]

Note the environment:
Windows 10 x64, GTX 1650, CUDA 10, PyTorch 1.1.0, TensorRT 7.0, ONNX 1.6

The output logits are different from the ONNX outputs (which are the same as PyTorch's).

I have also tried reading some TensorRT documentation, but could not solve this issue.

@Soroorsh
Author

> There was an error during engine creation, hence the engine is None. You have to provide more information about how your engine is created, because engine creation can fail for various reasons.

I use the TensorRT Python sample for creating the engine:

def build_engine_onnx(model_file):
    with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, trt.OnnxParser(network, TRT_LOGGER) as parser:
        builder.max_workspace_size = common.GiB(1)
        builder.max_batch_size = 32
        # Load the ONNX model and parse it in order to populate the TensorRT network.
        with open(model_file, 'rb') as model:
            if not parser.parse(model.read()):
                # Surface parser errors; otherwise the engine build silently returns None.
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                return None
        return builder.build_cuda_engine(network)

I had a problem converting EfficientNet from PyTorch to ONNX: ONNX can't export SwishImplementation. I use this line before exporting:

model.set_swish(memory_efficient=False)

Is it possible this layer causes the problem?
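Quite possibly: the memory-efficient Swish in EfficientNet-PyTorch is implemented as a custom torch.autograd.Function, which the ONNX tracer cannot export, and set_swish(memory_efficient=False) swaps in the plain formulation. A minimal sketch of what that plain Swish computes (pure Python, no torch, purely for illustration):

```python
import math

def swish(x):
    # Plain Swish: x * sigmoid(x). This is the formulation that
    # set_swish(memory_efficient=False) switches EfficientNet to,
    # replacing the custom autograd Function that torch.onnx cannot trace.
    return x * (1.0 / (1.0 + math.exp(-x)))

print(swish(0.0))  # 0.0
```

Numerically the two variants are identical; only the autograd plumbing differs, so exporting with the plain version should not change the model's outputs.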

@Soroorsh
Author

Soroorsh commented Apr 18, 2020

Finally, I could run the EfficientNet model using this environment:

TensorRT 7
ONNX 1.5.0
PyTorch 1.3.0
torchvision 0.4.2

@DecentMakeover

@Soroorsh hey, did you give any output names? I am getting the same error.

@Soroorsh
Author

Soroorsh commented May 25, 2020

> @Soroorsh hey, did you give any output names? I am getting the same error.

Hey,
I'm sorry for the belated answer.

I've used this code:

model = EfficientNet.from_pretrained("efficientnet-b0")
model.set_swish(memory_efficient=False)
model.eval()
dummy_input = torch.randn(1, 3, 224, 224)
# Name the network input; output names can be given the same way via output_names=[...].
# Note: input_names must be passed as a keyword argument; passed positionally,
# the fourth argument of torch.onnx.export is export_params, not input_names.
torch.onnx.export(model, dummy_input, "efficientnet-b0.onnx", input_names=['input0'])

@pshwetank
Contributor

@kochsebastian I am facing the same issue with the TensorRT model. As soon as I convert the ONNX model to TensorRT, the performance drops completely.
