Model Conversion

Po-Ting Ko edited this page Jan 14, 2024 · 3 revisions

Convert *.pt

In this part, we can simply use the export tool provided by yolov7.

  • $ python export.py --weights yolov7.pt --grid --simplify --img-size 640 640
  • Note that the --end2end option is not supported by Qualcomm SNPE, so we do not apply it.

In the export process, on line 159 of export.py, it is important to set the opset_version to 11.

(screenshot: export.py with opset_version set to 11)

Install ONNX before proceeding. Qualcomm officially requires ONNX version 1.6.0, but installing version 1.8.1 generally provides better compatibility and works for converting YOLOv7 models.

Convert *.onnx

Model conversion itself is not complex, because a model is simply a sequence of operations. However, different frameworks target different hardware platforms with their own specifications and implementations, so models must be converted between them. This process is typically facilitated by the Open Neural Network Exchange (ONNX) format.

When a conversion fails, troubleshoot by examining the network layers to identify the unsupported operations. The conversion failure logs also provide useful hints and information for resolving these issues.


  • $ snpe-onnx-to-dlc --input_network yolov7.onnx --output_path yolov7.dlc
  • Assuming the official pre-trained weights are used, you may see a series of WARNING logs during the conversion:

(screenshot: snpe-onnx-to-dlc warning logs)

  1. WARNING_OP_VERSION_NOT_SUPPORTED: can be ignored.

  2. RuntimeWarning: error_message=is not support in AIP: can be ignored.

    The reason is that the AIP (AI Processor) runtime supports only 4D input and output tensors, while the outputs of the flagged layers are 5D. Layers not supported by a runtime fall back to the CPU, so this does not affect the forward inference process.

Using Netron to analyze the exported ONNX model, you can observe that the unsupported layers are treated as part of the post-processing for forward inference.

(screenshot: Netron view of the yolov7.onnx detection head)

The information shown after the Reshape operation confirms that the output becomes 5-dimensional. Therefore, the goal is to cut the output before the Reshape, specifically at the output of the Convolution layer. Referring to the table on the right, the Convolution layer is named Conv_296 and its output is named 489. There are also two adjacent nodes (one per detection head), whose output names can be obtained the same way. Remember these output names; they are needed in the next step.
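The 5D shape comes from the detection head's Reshape, which splits the anchor dimension out of the channel axis. A simplified numpy sketch, assuming the standard COCO head of the official weights (3 anchors, 85 values per anchor, 80x80 grid for the stride-8 head at 640x640 input):

```python
import numpy as np

# Output of one detection-head Conv (e.g. Conv_296) -- a 4D tensor
# (batch, anchors * outputs, grid_h, grid_w) that AIP can run.
conv_out = np.zeros((1, 255, 80, 80), dtype=np.float32)  # 255 = 3 anchors * 85

# The Reshape node that follows splits the anchor axis out, producing the
# 5D tensor that AIP cannot handle and that triggers the CPU fallback.
head = conv_out.reshape(1, 3, 85, 80, 80)
assert head.ndim == 5
```

Cutting the graph at the Conv output therefore keeps every layer inside the DLC 4D.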

Now, you can proceed with the conversion of the *.dlc file.

  • $ snpe-onnx-to-dlc --input_network yolov7.onnx --output_path yolov7.dlc --out_node 489 --out_node 524 --out_node 559
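With those three output nodes, the DLC ends at the Conv layers, so your application must reshape and decode the raw 4D outputs itself. A sketch of the shapes to expect, assuming a 640x640 input and yolov7's standard head strides of 8, 16, and 32:

```python
# Each --out_node (489, 524, 559) is the 4D output of one detection head.
# For a 640x640 input, the grid size of each head is img_size // stride.
img_size = 640
strides = [8, 16, 32]     # strides of yolov7's three detection heads
na, no = 3, 85            # 3 anchors per head; 85 = 4 box + 1 obj + 80 classes

shapes = [(1, na * no, img_size // s, img_size // s) for s in strides]
print(shapes)  # [(1, 255, 80, 80), (1, 255, 40, 40), (1, 255, 20, 20)]
```

The anchor-split reshape, sigmoid, and box decoding that --end2end would have embedded must then be applied to these tensors as post-processing on the CPU.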