
Segmentation fault (core dumped) when converting a Keras model to TRT. #12

Closed
ys0232 opened this issue May 6, 2021 · 4 comments


ys0232 commented May 6, 2021

Describe the bug
There is no error or warning; it just core dumps when converting the Keras model to TRT. The Keras model is DenseNet121 from keras.
The code is as follows:

import numpy as np
import os

def save_keras_model(model_file):
    from keras.models import Model
    from keras.layers import Dense, Dropout
    import keras.backend as K
    from keras.applications.densenet import DenseNet121
    import keras.layers as layers
    import keras.models as models
    import keras.utils as utils

    class densenet121(object):
        def __init__(self, image_size):
            self.base_model = DenseNet121(input_shape=(image_size, image_size, 3),
                                          include_top=False, pooling='avg',
                                          backend=K,
                                          layers=layers,
                                          models=models,
                                          utils=utils,
                                          weights=None)
            x = Dropout(0.75)(self.base_model.output)
            x = Dense(3, activation='softmax', name='top_layer')(x)
            self.model = Model(self.base_model.input, x)
            print("Densenet121")

    model = densenet121(512).model
    model.save(model_file)

def forward_transfer(model_file):
    import forward
    # 1. Build the engine
    builder = forward.KerasBuilder()
    infer_mode = 'float32' # Infer Mode: float32 / float16 / int8_calib / int8
    batch_size = 1
    max_workspace_size = 1<<32

    builder.set_mode(infer_mode)
    engine = builder.build(model_file, batch_size)

    engine_path = os.path.splitext(model_file)[0]+'.engine'
    engine.save(engine_path)

def test_forward(model_file,inputs):
    import forward
    engine_path = os.path.splitext(model_file)[0] + '.engine'
    engine = forward.KerasEngine()
    engine.load(engine_path)

    # inputs = np.ones(1, 24, 24, 3)
    outputs = engine.forward([inputs]) # list_type output
    print(outputs)

model_path = 'densenet121.h5'
save_keras_model(model_path)
x = np.ones((1,512,512,3))
forward_transfer(model_path)
test_forward(model_path,x)

Environment

TensorRT Version: 7.1.3.4
NVIDIA GPU: T4
NVIDIA Driver Version: 410.104
CUDA Version: 10.2
CUDNN Version: 8.0
Operating System: ubuntu 18.04
Python Version (if applicable): 3.6.9
Tensorflow Version (if applicable): 1.15.0
PyTorch Version (if applicable): 1.7.0

print info:

[INFO ] 2021-05-06 16:16:20,516 trt_keras_parser.cpp(153): Parser::CreateNHWC2NCHWLayerDesc
[INFO ] 2021-05-06 16:16:20,516 keras_activation_creator.h(50): TrtActivationDesc::Create
Segmentation fault (core dumped)


ys0232 commented May 8, 2021

@aster2013 @yuanzexi Hi, could anyone help solve this problem?


yuanzexi commented May 8, 2021

@ys0232 Sorry for the late reply. We have fixed the problem you mentioned in the latest commit (1a9516e). You can pull the latest master branch and try your code again.


ys0232 commented May 17, 2021

@yuanzexi Thanks for the quick fix. I have another question: can one TensorFlow operation be mapped to several TensorRT operations in this framework? For example, converting "pack" in TensorFlow to "shuffle" + "concatenation" in TensorRT.
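For reference, the decomposition asked about here can be sketched in NumPy, independently of Forward's actual converter code (this is an illustration of the tensor semantics, not of any Forward or TensorRT API): TensorFlow's "pack" (tf.stack) is equivalent to inserting a unit axis into each input (the kind of reshape a TensorRT shuffle layer performs) and then concatenating along that new axis. The helper name below is hypothetical.

```python
import numpy as np

def pack_via_shuffle_concat(tensors, axis=0):
    """Emulate TF's "pack" (tf.stack) as shuffle + concatenation.

    Shuffle step: insert a new unit dimension at `axis` for each input
    (what a TensorRT shuffle layer's reshape can do).
    Concatenation step: join the expanded inputs along that new axis.
    """
    expanded = [np.expand_dims(t, axis=axis) for t in tensors]
    return np.concatenate(expanded, axis=axis)

a = np.ones((2, 3))
b = np.zeros((2, 3))
packed = pack_via_shuffle_concat([a, b], axis=0)
print(packed.shape)  # (2, 2, 3), same as np.stack([a, b], axis=0).shape
```

The same two-layer pattern generalizes to any `axis`, which is why a single TF op can reasonably lower to a small subgraph of TRT layers.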

yuanzexi (Collaborator) commented

@ys0232 Thanks for your feedback. It seems you opened another issue, #13, for this question; we'll address it there.
