pymnn inference quality is unstable #2867
Furthermore, the problem only seems to happen when the initialization + inference code runs in a separate Process (as in the production environment). In a single-thread, single-process test case, the problem never seems to occur.
Adding dynamic=True seems to make sense, since the decoder input shape always changes and the output shape also always changes. Is there anything I can do to improve inference speed?
dynamic=True loads the module as an expr function and decreases inference speed; it is intended for training models with MNN.
You can try not using raw numpy, and fully use MNN.numpy instead. Converting numpy data to and from MNN may cause errors.
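The exact failure mode of the numpy conversion isn't stated above, but non-contiguous views, float64 arrays, and outputs that alias MNN-owned memory are common culprits in this kind of interop bug. The following is a defensive-conversion sketch using plain numpy; the commented `MNN.expr.const` hand-off is an assumption about where the array would enter MNN, not confirmed code from this thread.

```python
import numpy as np

def to_mnn_safe(arr):
    """Normalize a numpy array before handing it to MNN.

    Contiguous float32 memory is the safest layout to pass across the
    boundary; views, slices, and float64 arrays can silently mismatch.
    """
    arr = np.ascontiguousarray(arr, dtype=np.float32)
    # return MNN.expr.const(arr, arr.shape)  # hypothetical hand-off to MNN
    return arr

def from_mnn_safe(var):
    """Copy MNN output into an independent numpy array.

    A zero-copy view may alias memory that MNN frees or reuses on the
    next forward pass; an explicit copy avoids that hazard.
    """
    return np.array(var, dtype=np.float32, copy=True)

# A strided float64 view -- exactly the kind of input that can go wrong.
raw = np.arange(12, dtype=np.float64).reshape(3, 4)[:, ::2]
safe = to_mnn_safe(raw)
out = from_mnn_safe(safe)
print(safe.flags["C_CONTIGUOUS"], safe.dtype)  # True float32
```

This keeps the slow-but-safe property the reporter asked for: every boundary crossing pays one copy, in exchange for arrays whose layout and lifetime are unambiguous.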
Thanks for your help, @jxt1234! Can you explain more about fully using MNN.numpy? Also, is there a safer way to convert back from MNN.numpy to numpy? Again, I don't mind if that is slow.
@jxt1234 I have uploaded a simple test to reproduce the issue. Once you download and expand it, there are 3 components:
Please do let me know if you have problems running this test, and once again, thanks for helping!
Marking as stale. No activity in 60 days. |
Platform (include target platform as well if cross-compiling):
Model converter compilation, model conversion, and pymnn compilation were all done on-device (an Orange Pi 5 Pro, CPU only: Arm A76+A55).
GitHub Version:
Tested on both 2.8.3 (2972fe7) and 2.8.4 (5895243)
Compiling Method:
For conversion (a HiFi-GAN audio generation model with randomness removed):
Conversion succeeds, and testMNNFromOnnx.py also passes (with randomness removed from the model code):
I converted the model using a simple command:
pymnn was installed following https://github.com/alibaba/MNN/blob/master/pymnn/INSTALL.md with no problems (again, I tried both 2.8.3 and 2.8.4; the final result was the same).
When running inference with pymnn, the quality of the audio produced equals the ONNX output about 90% of the time. The other 10% of the time, output quality is terrible. The output quality is fixed at model initialization: if the first output is high quality, all subsequent outputs are high quality; if the first output is poor, all subsequent outputs are poor.
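Because the quality is reportedly fixed at initialization time, one possible mitigation (not from this thread; a hedged workaround idea) is to run a fixed reference input immediately after loading, compare against a known-good output, and reinitialize on mismatch. The sketch below stubs the model with a fake init whose ~10% failure rate mirrors the report; `init_model` and the reference tensors are hypothetical stand-ins for the real pymnn loading code.

```python
import numpy as np

def init_model(seed):
    """Stub for model initialization; real code would load the MNN module.

    The seed simulates the nondeterministic init described in the issue:
    some initializations produce consistently bad output.
    """
    good = seed % 10 != 0  # ~10% of inits are "bad", mirroring the report
    def infer(x):
        return x * 2.0 if good else x * 0.0  # a bad init stays bad forever
    return infer

def init_with_sanity_check(reference_in, reference_out, max_tries=5):
    """Retry initialization until a fixed input reproduces the known-good output."""
    for attempt in range(max_tries):
        infer = init_model(seed=attempt + 1)
        if np.allclose(infer(reference_in), reference_out, atol=1e-4):
            return infer
    raise RuntimeError("no good initialization after %d tries" % max_tries)

ref_in = np.ones(4, dtype=np.float32)
ref_out = ref_in * 2.0  # captured once from a known-good run
model = init_with_sanity_check(ref_in, ref_out)
print(model(ref_in))  # matches the reference output
```

This does not fix the underlying bug, but if bad initializations really are detectable from the first output, it would keep bad sessions out of production while the root cause is investigated.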
This is my model initialization and inference code: