Hello @sdcb. I wanted to inquire about something. I'm in the process of transitioning from GPU to CPU, and for this I've built a custom version of Paddle tailored for CPU usage, incorporating ONNX Runtime, via custom cmake configuration flags. Following this, I employed the setup as follows:
var recognitionModel = LocalRecognizationModel.EnglishV3;
var config = PaddleDevice.Onnx();
var predictor = recognitionModel.CreateConfig().Apply(config).CreatePredictor();
I'm uncertain if this is the correct method for configuring ONNX. While it seems to function, I'm not completely sure if it's actually running the ONNX version. I didn't perform any explicit model conversions and couldn't locate any ONNX files on the machine where this code is executed. Could you offer some guidance? Should I carry out an explicit model conversion and then instantiate a recognition model using this ONNX file, or is the code I have already sufficient, with Paddle handling the rest?
Additional info:
I found this in the logs:
It seems to be converting the model by itself.
However, where can I find this converted model? Is there any way to set a caching folder for it? I'm running everything in AWS Lambda, and I don't want to spend time on the conversion in every Lambda invocation; I'd rather store the model in a cache. Could you advise?
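If an explicit conversion turns out to be preferable (so the ONNX file can be bundled with the Lambda deployment package instead of being produced on every cold start), the standalone paddle2onnx tool can generate it ahead of time. A minimal sketch, assuming the exported Paddle inference model lives in ./inference_model with the usual inference.pdmodel / inference.pdiparams file names (paths, output name, and opset version here are illustrative, not taken from this issue):

```shell
# Install the converter (assumed available from PyPI).
pip install paddle2onnx

# Convert the Paddle inference model into a single ONNX file
# that can be cached and shipped with the deployment package.
paddle2onnx \
  --model_dir ./inference_model \
  --model_filename inference.pdmodel \
  --params_filename inference.pdiparams \
  --save_file ./model_cache/rec_en_v3.onnx \
  --opset_version 11
```

The resulting .onnx file could then be included in the Lambda package (or placed on /tmp or an EFS mount) so no conversion work happens at invocation time.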