Request for Leaky Relu quantization support #26755
Comments
This is also part of issue tensorflow#26755
@xiaomin05, as of now TFLite does not support this; however, I have raised a PR which adds 8-bit quantization support for leaky relu. Until that PR is merged you can play around and test it. In case you find any issues, do let me know. Regards
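For readers unfamiliar with what such a kernel has to do: a quantized LeakyReLU maps int8 input codes back to reals, applies y = x (x >= 0) or y = alpha*x (x < 0), and requantizes to the output scale. The sketch below is a plain-Python reference illustration, not the PR's actual kernel code; real kernels use fixed-point multipliers instead of floats, and the scales and zero points here are made-up example values.

```python
def quantize(x, scale, zero_point):
    """Map a real value to a saturating int8 code."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))

def dequantize(q, scale, zero_point):
    """Map an int8 code back to a real value."""
    return (q - zero_point) * scale

def quantized_leaky_relu(q_in, alpha, in_scale, in_zp, out_scale, out_zp):
    """Float-detour reference for quantized LeakyReLU.

    Production kernels do the same arithmetic with integer multipliers
    and shifts; Python's round() also differs slightly (banker's
    rounding) from the rounding real kernels use.
    """
    x = dequantize(q_in, in_scale, in_zp)
    y = x if x >= 0 else alpha * x
    return quantize(y, out_scale, out_zp)
```

With in/out scale 0.1 and zero points 0, an input code of -50 (real -5.0) and alpha 0.1 comes back as -5 (real -0.5), showing how the negative slope survives requantization.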
Thanks for the effort. I reviewed the code. It seems that you need to make a change to quantize.cc as well, in order to make tflite_convert work:
index 2fa80f2..45873c2 100644
}
@xiaomin05, thanks for the comments. I will update this part as well. In the meantime, can you please let me know if you have tested this implementation? If so, kindly publish the results as well. Regards
This is for the issue tensorflow#26755.
Is this for post-training quantization or quantized training? I was able to run --post_training_quantize for a model with LeakyRelu from tf.nn.leaky_relu.
Hi, I am trying to perform post-training integer quantization with a configuration of …
Same here. Did you find a solution?
Hi @xiaomin05! Can we move this issue to closed status now? It seems the concerned PRs have been merged, per the comment above.
This issue has been automatically marked as stale because it has no recent activity. It will be closed if no further activity occurs. Thank you.
Closing as stale. Please reopen if you'd like to work on this further. |
Please go to Stack Overflow for help and support:
https://stackoverflow.com/questions/tagged/tensorflow
If you open a GitHub issue, here is our policy:
Here's why we have that policy: TensorFlow developers respond to issues. We want to focus on work that benefits the whole community, e.g., fixing bugs and adding features. Support only helps individuals. GitHub also notifies thousands of people when issues are filed. We want them to see you communicating an interesting problem, rather than being redirected to Stack Overflow.
System information
- Have I written custom code: No
- OS Platform and Distribution: Linux 18.04
- TensorFlow installed from (source or binary): Source
- TensorFlow version: 1.13.1
- Python version: 2.7.12
- Bazel version: 0.21
- GCC/Compiler version: 5.4.0
- CUDA/cuDNN version: No
Exact command to reproduce:
tflite_convert --output_file=yolo2.tflite --graph_def_file=yolo2.pb --input_arrays=input_1 --output_arrays=conv2d_23/BiasAdd --inference_type=QUANTIZED_UINT8 --mean_values=128 --std_dev_values=127 --default_ranges_min=0 --default_ranges_max=255
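For context on the flags in this command: per the TF Lite converter documentation, --mean_values and --std_dev_values define how uint8 input codes map back to real values via real = (quantized - mean_value) / std_dev_value, while --default_ranges_min/--default_ranges_max supply a fallback (min, max) range for ops that carry no recorded activation range. A small sketch of that mapping, using the values from the command above:

```python
def dequantize_uint8(q, mean_value=128.0, std_dev_value=127.0):
    """Real value implied by a uint8 input code under tflite_convert's
    real = (quantized - mean_value) / std_dev_value mapping.

    Defaults are the values from the command above, which make the
    uint8 range [0, 255] cover roughly [-1.008, 1.0].
    """
    return (q - mean_value) / std_dev_value

lo = dequantize_uint8(0)     # most negative representable real
hi = dequantize_uint8(255)   # most positive representable real
```

So with mean 128 and std_dev 127, code 128 represents exactly 0.0 and code 255 represents exactly 1.0.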
You can collect some of this information using our environment capture script:
https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh
You can obtain the TensorFlow version with:
python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
Describe the problem
Describe the problem clearly here. Be sure to convey here why it's a bug in TensorFlow or a feature request.
While trying to convert a YOLOv2 TensorFlow model to a quantized tflite model, tflite_convert complains that LeakyRelu quantization is not supported yet.
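One workaround sometimes used before a dedicated quantized LeakyRelu kernel existed (not proposed in this thread, and whether it helps depends on the converter version) is to rewrite the graph using ops that already have quantized kernels, since for 0 <= alpha <= 1 the identity leaky_relu(x) = max(x, alpha * x) holds. A minimal check of the identity in plain Python:

```python
def leaky_relu(x, alpha=0.2):
    """Reference LeakyReLU: x for x >= 0, alpha * x otherwise."""
    return x if x >= 0 else alpha * x

def leaky_relu_as_max(x, alpha=0.2):
    """Same function expressed as Maximum(x, alpha * x).

    Mul and Maximum had quantized kernels before LeakyRelu did, so a
    graph built this way can avoid the toco error shown below.
    """
    return max(x, alpha * x)

# The two forms agree: for x >= 0, alpha*x <= x so max picks x;
# for x < 0, alpha*x >= x so max picks alpha*x.
for v in (-3.0, -0.5, 0.0, 1.5):
    assert leaky_relu(v) == leaky_relu_as_max(v)
```

The identity breaks for alpha outside [0, 1], so this rewrite only applies to the usual small positive slopes.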
Source code / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem.
2019-03-15 11:04:15.496254: F tensorflow/lite/toco/graph_transformations/quantize.cc:491] Unimplemented: this graph contains an operator of type LeakyRelu for which the quantized form is not yet implemented. Sorry, and patches welcome (that's a relatively fun patch to write, mostly providing the actual quantized arithmetic code for this op).
Aborted