
Asking about coral delegates #56

Open
akunerio opened this issue Jun 20, 2022 · 3 comments

@akunerio

Hello,

I tried this example and ran it on a Raspberry Pi with Coral. It works well, thank you…
I have a question that is a little bit out of context.

However, when I tried to implement the Coral delegate for @tensorflow/qna, it was difficult, since the MobileBERT pre-trained model is auto-loaded by the qna package.

Is it still possible to use the Coral delegate for qna, or does Coral only work for object detection models like the one in this tutorial? Thank you.

@pyu10055
Collaborator

pyu10055 commented Sep 6, 2022

Coral could work for QnA models, but the model needs to be converted for the Coral device.
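
To illustrate what using the Coral delegate looks like once a model has been compiled for the Edge TPU (the qna package hides this step because it loads its model automatically), here is a minimal sketch using tflite_runtime. The file name model_edgetpu.tflite is a hypothetical placeholder, not something the qna package produces:

```python
import numpy as np
import tflite_runtime.interpreter as tflite

# Load a Coral-compiled model and attach the Edge TPU delegate.
# 'libedgetpu.so.1' is the Linux shared-library name; it differs on
# macOS and Windows.
interpreter = tflite.Interpreter(
    model_path="model_edgetpu.tflite",  # hypothetical compiled model
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed dummy input matching the model's expected shape and dtype.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]))
```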

@mattsoulanille
Member

mattsoulanille commented Sep 6, 2022

In general, you can convert a model using the Edge TPU Compiler. You can try it out on Google Colab without installing it.
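
As a side note, here is a minimal sketch of invoking the compiler from a Python/Colab cell, assuming edgetpu_compiler is already installed and on the PATH, and using the file name that appears in the log below:

```python
import subprocess

# Compile an already-quantized TFLite model for the Edge TPU.
# The compiler writes <name>_edgetpu.tflite plus an operation log
# next to the input file; unsupported ops stay on the CPU.
result = subprocess.run(
    ["edgetpu_compiler", "mobilebert_1_default_1.tflite"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)  # summary like the one quoted below
```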

In this case though, the mobilebert qna model that @tensorflow-models/qna uses does not have any Coral-supported ops.

Edge TPU Compiler version 16.0.384591198
Started a compilation timeout timer of 180 seconds.

Model compiled successfully in 520 ms.

Input model: mobilebert_1_default_1.tflite
Input size: 95.80MiB
Output model: mobilebert_1_default_1_edgetpu.tflite
Output size: 95.70MiB
On-chip memory used for caching model parameters: 0.00B
On-chip memory remaining for caching model parameters: 0.00B
Off-chip memory used for streaming uncached model parameters: 0.00B
Number of Edge TPU subgraphs: 0
Total number of operations: 2542
Operation log: mobilebert_1_default_1_edgetpu.log

Model successfully compiled but not all operations are supported by the Edge TPU. A percentage of the model will instead run on the CPU, which is slower. If possible, consider updating your model to use only operations supported by the Edge TPU. For details, visit g.co/coral/model-reqs.
Number of operations that will run on Edge TPU: 0
Number of operations that will run on CPU: 2542
See the operation log file for individual operation details.
Compilation child process completed within timeout period.
Compilation succeeded! 

Note these lines:

Number of operations that will run on Edge TPU: 0
Number of operations that will run on CPU: 2542

This model will need to be quantized to Uint8 and possibly re-trained in order to work on Coral. Even then, I'm not sure all of the ops will be supported. Coral seems to support mostly image-related models, and some audio models, but you can definitely try it and see. Here are some more details on the model requirements for Coral.
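
For context, post-training full-integer quantization is normally done with the TFLite converter before the model ever reaches the Edge TPU Compiler. Here is a minimal sketch of that general recipe; the SavedModel path and calibration data are hypothetical placeholders, and as the reply below points out, BERT-style models may fail at this step or still contain unsupported ops:

```python
import numpy as np
import tensorflow as tf

# Hypothetical calibration data; in practice this should iterate over
# a few hundred real inputs in the model's expected format.
def representative_dataset():
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("my_saved_model")  # hypothetical path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict the converter to int8 builtins so the Edge TPU Compiler can
# map the ops; models with unsupported ops will fail to convert here.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("model_quant.tflite", "wb") as f:
    f.write(converter.convert())
```

If conversion succeeds, the resulting .tflite file is what gets passed to edgetpu_compiler.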

Edit: I accidentally hit close on this issue instead of posting this.

@akunerio
Author

akunerio commented Sep 7, 2022

Thank you for your explanation. Now I understand.

However, the BERT model cannot be quantized to Uint8, so the model cannot be used with the Edge TPU.
https://github.com/sayakpaul/BERT-for-Mobile/blob/master/DistilBERT_SST-2_TPU.ipynb
