
Onnx configuration. #72

Open
grinay opened this issue Nov 15, 2023 · 1 comment
grinay commented Nov 15, 2023

Hello @sdcb. I wanted to inquire about something. I'm in the process of transitioning from GPU to CPU, and for this, I've built a custom version of Paddle tailored for CPU usage, incorporating ONNX runtime. Here are the cmake configuration flags I used:

cmake \
    -DWITH_GPU=OFF \
    -DWITH_MKL=ON \
    -DWITH_MKLDNN=ON \
    -DWITH_ONNXRUNTIME=ON \
    -DWITH_AVX=ON \
    -DWITH_PYTHON=OFF \
    -DWITH_TESTING=OFF \
    -DWITH_ARM=OFF \
    -DWITH_NCCL=OFF \
    -DWITH_RCCL=OFF \
    -DON_INFER=ON \
    ..

Following this, I employed the setup as follows:

var recognitionModel = LocalRecognizationModel.EnglishV3;
var config = PaddleDevice.Onnx();
var predictor = recognitionModel.CreateConfig().Apply(config).CreatePredictor();

I'm uncertain if this is the correct method for configuring ONNX. While it seems to function, I'm not completely sure if it's actually running the ONNX version. I didn't perform any explicit model conversions and couldn't locate any ONNX files on the machine where this code is executed. Could you offer some guidance? Should I carry out an explicit model conversion and then instantiate a recognition model using this ONNX file, or is the code I have already sufficient, with Paddle handling the rest?

Additional info:
I found this in the logs:
[screenshot of log output]
It seems to be converting by itself.
However, where can I find this model? Is there any way to set a caching folder for it? I'm running everything in AWS Lambda and don't want to spend time on conversion on every invocation; I'd rather store the converted model in a cache. Can you advise?
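If per-invocation conversion turns out to be the bottleneck, one possible workaround (an assumption on my part, not something confirmed by the maintainer for this library) is to convert the Paddle inference model to ONNX ahead of time with PaddlePaddle's standalone paddle2onnx tool and bundle the resulting file with the Lambda deployment package. The model directory and output paths below are illustrative placeholders:

```shell
# Install PaddlePaddle's standalone Paddle-to-ONNX converter.
pip install paddle2onnx

# Convert an exported PaddleOCR inference model to a single ONNX file
# ahead of time, so nothing needs to be converted at runtime.
# Directory names, output path, and opset version are placeholders.
paddle2onnx \
    --model_dir ./en_PP-OCRv3_rec_infer \
    --model_filename inference.pdmodel \
    --params_filename inference.pdiparams \
    --save_file ./model-cache/rec_en_v3.onnx \
    --opset_version 11
```

Whether the resulting `.onnx` file can then be loaded directly through this library's model classes instead of the in-memory conversion path is a separate question for the maintainer.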

sdcb (Owner) commented Nov 30, 2023

It's happening in memory; you can't find this model as a file on disk.
