Export to ONNX #146
Thank you for your interest in improving Deepparse.
Hi @ml5ah, I only used ONNX once, and it was not a successful experience (it was my initial idea for handling Deepparse weights). If I recall correctly, the bottleneck was that you had to set the batch size to a specific value; thus, it was cumbersome to find an appropriate one. However, that was back in 2019 or so, and things might have evolved since. I will take a look at it. So the best case would be an export method for an AddressParser to export itself into the ONNX format, right?
I've looked at the PyTorch doc, and it still seems like you need to provide a batch-like dataset for the export. If you come up with a method that can export the
Thanks for the reply, @davebulaval! Yes, that's correct: an export method would be the best way. I have been trying to work with the built-in export function in torch but keep running into some issues. The export call works, but I'm having trouble initializing an inference session in onnxruntime. Fixing the batch_size to 1 would be good as well, for starters! Do share any insights/suggestions. Thanks! FYI, my error:
@davebulaval saw your updated reply - got it, that makes sense. Sure, I'll keep you posted if I have any success.
It seems like a float typing error (it converts some tensors into a float and others into a long). The LSTM parameters are LongTensor, and the problem may be there.
@ml5ah this post might be useful: https://stackoverflow.com/questions/57299674/trouble-converting-lstm-pytorch-model-to-onnx.
@ml5ah I've just added the

If you need to use the model in another ML framework or code base (e.g. Java, etc.), you can 'simply' load the weights matrix. Usually, that is convenient, but you might need some naming/format conversion.
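As a hypothetical illustration of the "load the weights matrix" route: save a tiny LSTM's `state_dict` as a stand-in checkpoint, reload it, and list the tensors you would need to map into another framework. The file name is made up, and the real Deepparse `.ckpt` layout may differ:

```python
# Illustrative only: a toy LSTM stands in for the real checkpoint.
import torch
import torch.nn as nn

torch.save(nn.LSTM(300, 64).state_dict(), "demo_weights.ckpt")

weights = torch.load("demo_weights.ckpt", map_location="cpu")
for name, tensor in weights.items():
    # PyTorch's LSTM naming convention, e.g. weight_ih_l0 with
    # shape (4 * hidden_dim, input_dim) = (256, 300) here.
    print(name, tuple(tensor.shape))
```

The printed names follow PyTorch's convention (`weight_ih_l0`, `weight_hh_l0`, `bias_ih_l0`, ...), which is where the naming/format conversion work comes in when porting to another framework.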
Thanks @davebulaval! That function helped and I was able to move forward, albeit with some more roadblocks along the way. I faced two problems:
@ml5ah I see. Yeah, it seems like it is not doable for now. And what exactly is your objective? Into which language are you trying to import it?
@davebulaval The objective is to deploy the pre-trained address parser model for inference using onnxruntime (in either Python or Java).
Ok, I got it. Do you want the address parser as an API-like service?
Yep, exactly. With the constraint that inference uses onnxruntime with no dependency on PyTorch.
Keep us updated on your progress. I would love to have 1) the script for ONNX conversion and 2) the script to bundle it into an API. It would be a great doc improvement to have that.
This issue is stale because it has been open 60 days with no activity. |
@ml5ah hey, I'm curious whether you managed to create an ONNX export. If yes, it would be great if you could share your insights.
The last time I checked, my bottleneck with ONNX was that batch size needed to be fixed beforehand. |
@kleineroscar @davebulaval apologies for the late reply. It's been a while since I actively looked at this issue. ONNX does provide support for dynamic batch sizes via the dynamic_axes feature. But, as far as I remember, it did not work out of the box for Deepparse until some time ago. I will give it another shot - hopefully things have changed.
I took another look; this doesn't seem to be a trivial problem. Please share your thoughts.
The embedding conversion is done outside of the model. Thus, the LSTM expects to receive a 300d vector. I think, for simplicity, we could fix a batch size, but it would require padding the examples if the number of addresses to parse is smaller than the batch size.

I've added functionality to allow someone to put the weights in an S3 bucket and process data in a cloud service. It could be a workaround to create an API in Python.

I have a friend working on Burn, a Rust Torch-like framework, but LSTM is not yet implemented there. I would prioritize a Rust implementation of Deepparse rather than working on/with ONNX.
Is your feature request related to a problem? Please describe.
Has someone successfully converted the address parser model to the ONNX format?
Describe the solution you'd like
A script to convert the Address Parser (.ckpt) model to ONNX (.onnx).