After resizeSession, the MNN model's inference results show large errors #2871
Comments
Don't resize the output: `m_mnnNet_decoder->resizeTensor(output_vector, {2, input_ids_size, 46});`
Tried that; no change.
If I export the ONNX model directly at the required size, without setting dynamic axes, the inference results are correct.
If you export the ONNX with dynamic axes set, what result do you get when you test it with testMNNFromOnnx.py at the specified input sizes?
`int i_modelW2 = input_img->width();` — this part is problematic: for non-4-D tensors, don't use `width()`/`height()` etc.; use `length(0)`, `length(1)`, `length(2)` instead.
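The point above can be illustrated outside of MNN with a small numpy sketch (all shapes here are assumptions, not taken from the issue's model): a named accessor such as `width()` hard-codes a 4-D layout, so on a 3-D tensor it reads past the end of the shape, whereas indexing dimensions directly, as MNN's `length(i)` does, works for any rank.

```python
import numpy as np

# MNN's width()/height() assume a 4-D layout (e.g. NCHW, where width = shape[3]).
def width_nchw(t):
    return t.shape[3]  # breaks for anything that is not 4-D

t3 = np.zeros((1, 256, 32))  # a 3-D tensor, like the fixed inputs in this issue

# width_nchw(t3) raises IndexError: the tensor has no 4th dimension.
# Index the dimensions explicitly instead -- the length(0..2) equivalent:
print(t3.shape[0], t3.shape[1], t3.shape[2])
```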
`::memcpy(input_1->writeMap(), src_mask.data(), src_mask.size() * sizeof(bool));`
Could this error be coming from resizeSession?
Marking as stale. No activity in 60 days. |
Platform (include target platform as well if cross-compiling): x86, Ubuntu 22.04
GitHub version: MNN 2.8.1
After converting the PyTorch model to ONNX (with dynamic input sizes enabled) and then to MNN, inference runs in a loop. The model has three inputs: two are fixed, and the size of the third grows on every iteration. The first inference matches the original model's output, but after calling resizeSession for the second iteration, the inference results no longer match expectations. Code below:
```cpp
int Decoder::deInfer(const std::vector src, const std::vector src_mask, int input_h,
                     std::vector<std::vector<int32_t>> ids, float* next_token_logits) {
#if MODULE
#else
    if (!m_mnnNet_decoder) {
        printf("error: CFaceDetection::FaceDetectImp(), m_mnnNet_det is null.\n");
        cout << 1 << endl;
        return -1;
    }
#endif
    return 0;
}
```
Both the Session and the Module inference paths give wrong results.
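The looped-inference pattern described above can be sketched schematically. This is a minimal numpy mock (the MNN session is replaced by a stub `run_decoder`, and all names and shapes are assumptions), just to make explicit how the third input grows each step — which is what forces a resizeTensor/resizeSession on every iteration in the real code:

```python
import numpy as np

def run_decoder(input1, input2, input3):
    # Stub standing in for the MNN session. In the real code each step would
    # call resizeTensor on the input, then resizeSession, then runSession.
    # The output shape mirrors the issue's resizeTensor call: (2, len, 46).
    return np.zeros((2, input3.shape[1], 46), dtype=np.float32)

input1 = np.zeros((1, 256, 32), dtype=np.float32)  # fixed input (assumed shape)
input2 = np.zeros((1, 256, 32), dtype=np.float32)  # fixed input (assumed shape)
input_ids = np.array([[1]], dtype=np.int32)        # grows by one token per step

for step in range(3):
    logits = run_decoder(input1, input2, input_ids)
    next_token = int(logits[0, -1].argmax())       # greedy pick (all zeros here)
    input_ids = np.concatenate([input_ids, [[next_token]]], axis=1)

print(input_ids.shape)  # (1, 4): started at length 1, three steps appended
```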
The code used to export the model to ONNX is as follows:
```python
torch.onnx.export(
    decoder,
    (src[0].to(device), src_mask[0].to(device), input_ids.to(device)),
    "decoder_0515.onnx",
    input_names=["input1", "input2", "input3"],
    output_names=["output"],
    dynamic_axes={
        "input1": {2: "input_width"},
        "input2": {2: "input_width"},
        "input3": {1: "length"},
    },
    verbose=True,
    opset_version=19,
)
```
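As a quick illustration of what the `dynamic_axes` mapping above means (a self-contained sketch — `shape_matches` and the example shapes are hypothetical, not part of any ONNX or MNN API): each axis listed in `dynamic_axes` may take any size at inference time, while every other axis must keep its export-time size.

```python
def shape_matches(concrete, fixed, dynamic_axes):
    """concrete/fixed: tuples of ints; dynamic_axes: set of axis indices
    that were declared dynamic at export time."""
    if len(concrete) != len(fixed):
        return False
    return all(i in dynamic_axes or c == f
               for i, (c, f) in enumerate(zip(concrete, fixed)))

# "input3" above was exported with axis 1 ("length") dynamic, axis 0 fixed.
fixed_input3 = (1, 8)  # assumed export-time shape
print(shape_matches((1, 8), fixed_input3, {1}))   # True
print(shape_matches((1, 32), fixed_input3, {1}))  # True: axis 1 is dynamic
print(shape_matches((2, 8), fixed_input3, {1}))   # False: axis 0 must stay 1
```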