
Attempts at Converting the Paddle Model

SWHL edited this page Apr 6, 2022 · 2 revisions

✗ (2022-04-06) Converting the model to IR format with OpenVINO's mo

  • openvino-dev: 2022.1.0
  • Conversion command:
    mo --input_model=resources/models/speedyspeech_csmsc/speedyspeech_csmsc.pdmodel     
  • Error message:
    E:\PythonProjects\RapidTTS2>mo --input_model=resources/models/speedyspeech_csmsc/speedyspeech_csmsc.pdmodel
    Model Optimizer arguments:
    Common parameters:
            - Path to the Input Model:      E:\PythonProjects\RapidTTS2\resources/models/speedyspeech_csmsc/speedyspeech_csmsc.pdmodel
            - Path for generated IR:        E:\PythonProjects\RapidTTS2\.
            - IR output name:       speedyspeech_csmsc
            - Log level:    ERROR
            - Batch:        Not specified, inherited from the model
            - Input layers:         Not specified, inherited from the model
            - Output layers:        Not specified, inherited from the model
            - Input shapes:         Not specified, inherited from the model
            - Source layout:        Not specified
            - Target layout:        Not specified
            - Layout:       Not specified
            - Mean values:  Not specified
            - Scale values:         Not specified
            - Scale factor:         Not specified
            - Precision of IR:      FP32
            - Enable fusing:        True
            - User transformations:         Not specified
            - Reverse input channels:       False
            - Enable IR generation for fixed input shape:   False
            - Use the transformations config file:  None
    Advanced parameters:
            - Force the usage of legacy Frontend of Model Optimizer for model conversion into IR:   False       
            - Force the usage of new Frontend of Model Optimizer for model conversion into IR:      False       
    OpenVINO runtime found in:      f:\miniconda3\envs\pytorch\lib\site-packages\openvino
    OpenVINO runtime version:       2022.1.0-7019-cdb9bec7210-releases/2022/1
    Model Optimizer version:        2022.1.0-7019-cdb9bec7210-releases/2022/1
    [libprotobuf ERROR C:\j\workspace\private-ci\ie\build-windows-vs2019@3\b\repos\openvino\thirdparty\protobuf\protobuf\src\google\protobuf\message_lite.cc:133] Can't parse message of type "paddle.framework.proto.ProgramDesc" because it is missing required fields: (cannot determine missing fields for lite message)
    [ ERROR ]  -------------------------------------------------
    [ ERROR ]  ----------------- INTERNAL ERROR ----------------
    [ ERROR ]  Unexpected exception happened.
    [ ERROR ]  Please contact Model Optimizer developers and forward the following information:
    [ ERROR ]  Check 'm_fw_ptr->ParseFromIstream(&pb_stream)' failed at C:\j\workspace\private-ci\ie\build-windows-vs2019@3\b\repos\openvino\src\frontends\paddle\src\input_model.cpp:315:
    FrontEnd API failed with GeneralFailure: :
    Model can't be parsed
    
    [ ERROR ]  Traceback (most recent call last):
    File "f:\miniconda3\envs\pytorch\lib\site-packages\openvino\tools\mo\main.py", line 533, in main
        ret_code = driver(argv)
    File "f:\miniconda3\envs\pytorch\lib\site-packages\openvino\tools\mo\main.py", line 489, in driver        
        graph, ngraph_function = prepare_ir(argv)
    File "f:\miniconda3\envs\pytorch\lib\site-packages\openvino\tools\mo\main.py", line 394, in prepare_ir    
        ngraph_function = moc_pipeline(argv, moc_front_end)
    File "f:\miniconda3\envs\pytorch\lib\site-packages\openvino\tools\mo\moc_frontend\pipeline.py", line 29, in moc_pipeline
        input_model = moc_front_end.load(argv.input_model)
    openvino.pyopenvino.GeneralFailure: Check 'm_fw_ptr->ParseFromIstream(&pb_stream)' failed at C:\j\workspace\private-ci\ie\build-windows-vs2019@3\b\repos\openvino\src\frontends\paddle\src\input_model.cpp:315:
    FrontEnd API failed with GeneralFailure: :
    Model can't be parsed
    
    
    [ ERROR ]  ---------------- END OF BUG REPORT --------------
    [ ERROR ]  -------------------------------------------------
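    The protobuf error above ("missing required fields ... cannot determine missing fields for lite message") usually means the `.pdmodel` bytes are incomplete or are not a serialized `ProgramDesc` at all, e.g. a truncated download or an unfetched Git LFS pointer file. A minimal stdlib sketch that rules out those two cases before blaming Model Optimizer (the function name and the 1024-byte threshold are illustrative assumptions, not part of any tool):

    ```python
    from pathlib import Path

    def sanity_check_pdmodel(path: str) -> str:
        """Rough integrity check for a serialized Paddle model file.

        Does NOT prove the protobuf is valid; it only catches the two most
        common causes of an unparseable file: an unfetched Git LFS pointer
        and an empty or truncated download.
        """
        data = Path(path).read_bytes()
        if not data:
            return "empty file"
        # Git LFS pointer files are tiny text files starting with this header
        if data.startswith(b"version https://git-lfs.github.com/spec/"):
            return "git-lfs pointer (run `git lfs pull`)"
        if len(data) < 1024:
            return f"suspiciously small ({len(data)} bytes)"
        return "looks plausible"
    ```

    If the check reports "looks plausible" and mo still fails, the file is at least complete, and the parse failure more likely comes from a Paddle format version the OpenVINO 2022.1 Paddle frontend does not understand.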
    

✗ (2022-04-06) Running the pdmodel from the acoustic step directly with OpenVINO

  • openvino: 2022.1.0
  • OS: Windows 10 64位
  • Inference code:
    from openvino.runtime import Core

    # Read the Paddle model with OpenVINO
    ie = Core()
    paddle_model = ie.read_model(pdmodel_path)  # pdmodel_path: path to the .pdmodel file
    compiled_model = ie.compile_model(model=paddle_model, device_name='CPU')
    paddle_session = compiled_model.create_infer_request()
  • Error message:
    E:\PythonProjects\RapidTTS2>python tts2.py
    初始化前处理部分
    frontend done!
    初始化提取特征模型
    [libprotobuf ERROR C:\j\workspace\private-ci\ie\build-windows-vs2019@3\b\repos\openvino\thirdparty\protobuf\protobuf\src\google\protobuf\message_lite.cc:133] Can't parse message of type "paddle.framework.proto.ProgramDesc" because it is missing required fields: (cannot determine missing fields for lite message)
    Traceback (most recent call last):
    File "tts2.py", line 36, in <module>
        am_predictor = SpeedySpeechAcoustic(pdmodel_path, pdiparam_path)
    File "E:\PythonProjects\RapidTTS2\acoustic\speedyspeech_csmsc.py", line 28, in __init__
        paddle_model = ie.read_model(pdmodel_path)
    RuntimeError: Check 'm_fw_ptr->ParseFromIstream(&pb_stream)' failed at C:\j\workspace\private-ci\ie\build-windows-vs2019@3\b\repos\openvino\src\frontends\paddle\src\input_model.cpp:315:
    FrontEnd API failed with GeneralFailure: :
    Model can't be parsed
    

✗ (2022-04-06) Converting the acoustic model to ONNX with Paddle2ONNX

  • Paddle2onnx: 0.9.2
  • Conversion command:
    paddle2onnx --model_dir resources/models/speedyspeech_csmsc \
                --model_filename resources/models/speedyspeech_csmsc/speedyspeech_csmsc.pdmodel \
                --params_filename resources/models/speedyspeech_csmsc/speedyspeech_csmsc.pdiparams \
                --save_file tmp \
                --opset_version 12
  • Error message:
    NotImplementedError:
    There's 2 ops are not supported yet
    =========== conditional_block ===========
    =========== while ===========
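
    Paddle2ONNX 0.9.2 has no exporters for `conditional_block` and `while`, the control-flow ops this model contains. Since protobuf serializes op-type names as plain strings, a rough way to check a `.pdmodel` for such ops before attempting a conversion is to scan the raw bytes for the names. This is a heuristic sketch, not a real `ProgramDesc` parser, and the function name is an assumption for illustration:

    ```python
    from pathlib import Path

    # Op types Paddle2ONNX 0.9.2 reported as unsupported for this model
    UNSUPPORTED_OPS = (b"conditional_block", b"while")

    def find_unsupported_ops(pdmodel_path: str) -> list:
        """Scan a serialized ProgramDesc for unsupported op-type names.

        Heuristic: protobuf stores op types as raw strings, so a byte search
        finds them without parsing the message. False positives are possible
        (the name could occur inside some other string), so treat a hit as
        "worth checking", not as proof.
        """
        data = Path(pdmodel_path).read_bytes()
        return [op.decode() for op in UNSUPPORTED_OPS if op in data]
    ```

    If both names show up, the conversion is expected to fail on 0.9.2 regardless of the opset version chosen; exporting a static-shape variant of the model from PaddleSpeech (so the control-flow ops are not emitted) would be one direction to try.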