
FastDeploy is already deployed on a Rockchip RK3568. How do I use my own ONNX model with the YOLOv5 example? #63656

Closed
dxisdx opened this issue Apr 18, 2024 · 1 comment

dxisdx commented Apr 18, 2024

Please ask your question

FastDeploy is already deployed on a Rockchip RK3568. How do I use my own object-detection ONNX model with the YOLOv5 example?
The ready-made model downloaded by the infer example consists of model.pdiparams and model.pdmodel files.
I now have my own ONNX model and don't know how to reference it in the program. Below is the example program that runs correctly; it detects people.

import fastdeploy as fd
import cv2
import os


def parse_arguments():
    import argparse
    import ast
    parser = argparse.ArgumentParser()
    parser.add_argument("--model", default=None, help="Path of yolov5 model.")
    parser.add_argument(
        "--image", default=None, help="Path of test image file.")
    parser.add_argument(
        "--device",
        type=str,
        default='cpu',
        help="Type of inference device, support 'cpu' or 'gpu' or 'kunlunxin'.")
    parser.add_argument(
        "--use_trt",
        type=ast.literal_eval,
        default=False,
        help="Whether to use tensorrt.")
    return parser.parse_args()


def build_option(args):
    option = fd.RuntimeOption()
    if args.device.lower() == "kunlunxin":
        option.use_kunlunxin()

    if args.device.lower() == "gpu":
        option.use_gpu()

    if args.device.lower() == "ascend":
        option.use_ascend()

    if args.use_trt:
        option.use_trt_backend()
        option.set_trt_input_shape("images", [1, 3, 640, 640])
    return option


args = parse_arguments()

# Configure the runtime and load the model
runtime_option = build_option(args)
model_file = os.path.join(args.model, "model.pdmodel")
params_file = os.path.join(args.model, "model.pdiparams")

model = fd.vision.detection.YOLOv5(
    model_file,
    params_file,
    runtime_option=runtime_option,
    model_format=fd.ModelFormat.PADDLE)

# Predict the detection result for an image
if args.image is None:
    image = fd.utils.get_detection_test_image()
else:
    image = args.image
im = cv2.imread(image)
result = model.predict(im)
print(result)

# Visualize the prediction result
vis_im = fd.vision.vis_detection(im, result)
cv2.imwrite("visualized_result.jpg", vis_im)
print("Visualized result save in ./visualized_result.jpg")
[Attached screenshots: 微信截图_20240418162953, 微信截图_20240418163006, 微信截图_20240418163014]

caizejun (Contributor) commented:

https://github.com/PaddlePaddle/FastDeploy/blob/develop/README_CN.md FastDeploy supports deploying ONNX models directly. That link is the FastDeploy repository; you can look through the example APIs there to see how to use them.
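
For reference, a minimal sketch of how the example above could be pointed at an ONNX file instead of the Paddle model pair. The path yolov5s.onnx and the image path are placeholders for your own files; the key change is passing an empty params file and model_format=fd.ModelFormat.ONNX. This assumes the ONNX export keeps the standard YOLOv5 input/output layout and that inference runs on the CPU backend; running on the RK3568 NPU instead requires converting the model and following the rknpu2 examples in the repository.

import fastdeploy as fd
import cv2

# Hypothetical paths -- replace with your own exported ONNX model and test image.
onnx_model_file = "yolov5s.onnx"
image_file = "test.jpg"

# CPU runtime here; adjust the backend to match your FastDeploy build.
option = fd.RuntimeOption()
option.use_cpu()

# An ONNX model has no separate params file, so pass an empty string and
# switch the model format from PADDLE to ONNX.
model = fd.vision.detection.YOLOv5(
    onnx_model_file,
    "",
    runtime_option=option,
    model_format=fd.ModelFormat.ONNX)

im = cv2.imread(image_file)
result = model.predict(im)
print(result)

vis_im = fd.vision.vis_detection(im, result)
cv2.imwrite("visualized_result.jpg", vis_im)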
