OrtPerfTuning failed when running with the example. #475
Update: Actually, please ignore the above. The OrtPerfTuning pass never creates a new model file.
OK.
Sorry, I was not clear in my response. My initial response was incorrect, so I struck through the comment before adding an update. For context, the output model of Olive consists of the actual .onnx file and a corresponding inference-settings dict that is used to create the ONNX Runtime inference session. PerfTuning was not skipped: it ran, but it chose the original model's inference settings as the best in the environment you ran it on. In ours, it did not. This did not cause the missing third model; that is expected for perf tuning regardless, since the perf-tuning pass doesn't create a new model file.
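To make the two-part output described above concrete, here is a minimal sketch of how an inference-settings dict might map onto session-creation arguments. The key names and the helper `build_session_kwargs` are illustrative assumptions, not Olive's exact schema:

```python
def build_session_kwargs(inference_settings):
    """Translate a hypothetical inference-settings dict into keyword
    arguments for onnxruntime.InferenceSession (illustrative only)."""
    return {
        "providers": inference_settings.get(
            "execution_provider", ["CPUExecutionProvider"]
        ),
        "provider_options": inference_settings.get("provider_options", [{}]),
    }

# An example settings dict that prefers DirectML, falling back to CPU.
settings = {
    "execution_provider": ["DmlExecutionProvider", "CPUExecutionProvider"],
    "provider_options": [{}, {}],
}

kwargs = build_session_kwargs(settings)
# The session would then be created roughly as:
# session = onnxruntime.InferenceSession("model.onnx", **kwargs)
```

The point is that the same .onnx file can ship with different settings dicts; perf tuning only changes the dict, not the model file.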
So, you mean this error is trying to say that no better solution was found?
Is that possible? The code downloads the original model from the Internet, so I don't think we could get different results unless the file on the Internet changed.
This error does mean that a specific combination of inference settings failed. But that was not the reason a third model file was not created, since the perf-tuning pass still ran the other combinations, if any.
Yes, this is possible for perf tuning. Perf tuning searches through different settings for the ONNX Runtime inference session and chooses the one with the best latency. The latency depends on many factors, such as your hardware.
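The search-and-pick behavior described above can be sketched with a toy example: time each candidate configuration and keep the fastest. The candidate names and the `pick_best_settings` helper are made up for illustration; Olive's real pass is considerably more involved (warmup runs, repeated measurements, etc.):

```python
import time


def pick_best_settings(run_fns):
    """Toy perf tuning: run each candidate once, measure its latency,
    and return the name of the fastest one."""
    best_name, best_latency = None, float("inf")
    for name, run in run_fns.items():
        start = time.perf_counter()
        run()  # one "inference" with this configuration
        latency = time.perf_counter() - start
        if latency < best_latency:
            best_name, best_latency = name, latency
    return best_name


# Stand-ins for inference under two different settings combos.
candidates = {
    "original_settings": lambda: sum(range(100_000)),
    "tuned_settings": lambda: sum(range(100)),
}
best = pick_best_settings(candidates)
```

Because the winner depends on measured wall-clock time on the machine running the search, two environments can legitimately pick different settings, which is why the original settings won on one machine but not another.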
Thanks for the explanation. I now understand why no model was generated for OrtPerfTuning.
We can use this issue itself. Did you install Olive from PyPI or use a specific tag? Could you install Olive from source? https://microsoft.github.io/Olive/getstarted/installation.html#install-from-source
I installed it via `pip install olive-ai[directml]`.
Please try with the source installation.
I was trying to run the DirectML SqueezeNet example, but I got an error.
It also generated only two models; I think it should generate three, with one coming from OrtPerfTuning.
I made no changes to the code. Does anybody else have the same issue?