
Inference for int8 efficientdet-d{$n} does not run, unlike efficientdet-lite{$n} #1052

Open
drahmad89 opened this issue Jul 13, 2021 · 3 comments

Comments

@drahmad89

I have followed the instructions provided by @fsx950223 to create an int8 quantized TFLite model, with both weights and layer outputs quantized. The TFLite model obtained from an efficientdet-d2 checkpoint was unable to run inference: the script ran forever without any output and appeared to be stuck somewhere. When I ran inference with a TFLite model converted from an efficientdet-lite0 checkpoint instead, it went just fine.

Note: for efficientdet-d2-int8.tflite, inference (using keras.eval_tflite) did not print any error message and kept running forever.
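For reference, this is roughly the full-integer post-training quantization recipe I followed (a minimal sketch, not @fsx950223's exact script: the SavedModel path, the hypothetical `rep_data_gen` generator, and the 768x768 input size are assumptions from my setup):

```python
import tensorflow as tf

# Hypothetical representative dataset generator; in practice this should
# yield preprocessed images at the model's input resolution (768x768 for d2).
def rep_data_gen():
    for _ in range(100):
        yield [tf.random.uniform([1, 768, 768, 3], dtype=tf.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("efficientdet-d2_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = rep_data_gen
# Quantize both weights and activations to int8 (full-integer quantization).
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("efficientdet-d2-int8.tflite", "wb") as f:
    f.write(tflite_model)
```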

@fsx950223
Collaborator

@drahmad89
Author

I let it run for four hours without any output.

@drahmad89
Author

tensorflow/tensorflow#40183

Speed is not the problem. With d0, inference runs, albeit slowly; with d2, it does not get through even a single sample, no matter how many hours I wait.
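To be concrete, the single-sample test I am timing is essentially the following, using the standard tf.lite.Interpreter API (the model path is from my setup, and a zero tensor stands in for a real preprocessed image):

```python
import time
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="efficientdet-d2-int8.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
# Dummy input matching the model's quantized input spec (shape and dtype).
dummy = np.zeros(input_details["shape"], dtype=input_details["dtype"])
interpreter.set_tensor(input_details["index"], dummy)

start = time.time()
interpreter.invoke()  # with the d2 int8 model, this call never returns
print(f"single invoke() took {time.time() - start:.1f}s")
```

With the lite0 int8 model this prints a time and exits; with the d2 int8 model it never gets past invoke().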
