Request for Leaky Relu quantization support #26755

Closed
xiaomin05 opened this issue Mar 15, 2019 · 10 comments
Labels
comp:lite TF Lite related issues comp:ops OPs related issues stale This label marks the issue/pr stale - to be closed automatically if no activity stat:awaiting response Status - Awaiting response from author type:feature Feature requests

Comments

@xiaomin05



System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
    No
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
    Ubuntu 18.04
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
  • TensorFlow installed from (source or binary):
    Source
  • TensorFlow version (use command below):
    1.13.1
  • Python version:
    2.7.12
  • Bazel version (if compiling from source):
    0.21
  • GCC/Compiler version (if compiling from source):
    5.4.0
  • CUDA/cuDNN version:
    None
  • GPU model and memory:
  • Exact command to reproduce:
    tflite_convert --output_file=yolo2.tflite --graph_def_file=yolo2.pb --input_arrays=input_1 --output_arrays=conv2d_23/BiasAdd --inference_type=QUANTIZED_UINT8 --mean_values=128 --std_dev_values=127 --default_ranges_min=0 --default_ranges_max=255


Describe the problem

While trying to convert a YOLOv2 TensorFlow model to a quantized TFLite model, tflite_convert complains that LeakyRelu quantization is not supported yet.

Source code / logs


2019-03-15 11:04:15.496254: F tensorflow/lite/toco/graph_transformations/quantize.cc:491] Unimplemented: this graph contains an operator of type LeakyRelu for which the quantized form is not yet implemented. Sorry, and patches welcome (that's a relatively fun patch to write, mostly providing the actual quantized arithmetic code for this op).
Aborted
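For context, the missing piece is a quantized kernel for this op. Under TFLite's affine quantization scheme (real = scale * (q - zero_point)), the intended arithmetic is roughly the float-simulated reference below; this is only an illustrative sketch with made-up function and parameter names, not the fixed-point kernel the converter is asking for:

import numpy as np

def leaky_relu_uint8_reference(q_in, in_scale, in_zero_point,
                               out_scale, out_zero_point, alpha=0.1):
    # Dequantize the uint8 input: real = scale * (q - zero_point).
    x = in_scale * (q_in.astype(np.int32) - in_zero_point)
    # LeakyRelu in float: x for x >= 0, alpha * x otherwise.
    y = np.where(x >= 0.0, x, alpha * x)
    # Requantize to the output scale/zero-point and clamp to the uint8 range.
    q_out = np.round(y / out_scale) + out_zero_point
    return np.clip(q_out, 0, 255).astype(np.uint8)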

@jvishnuvardhan jvishnuvardhan self-assigned this Mar 18, 2019
@jvishnuvardhan jvishnuvardhan added comp:lite TF Lite related issues comp:ops OPs related issues type:feature Feature requests labels Mar 18, 2019
@jvishnuvardhan jvishnuvardhan added the stat:awaiting tensorflower Status - Awaiting response from tensorflower label Mar 18, 2019
amitsrivastava78 pushed a commit to amitsrivastava78/tensorflow that referenced this issue Mar 23, 2019
@amitsrivastava78
Contributor

@xiaomin05, as of now TFLite does not support this; however, I have raised a PR that adds 8-bit quantization for LeakyRelu. Until that PR is merged, you can build it and test it out. If you find any issues, do let me know.
The link for the PR is:
#27061

Regards
Amit

@xiaomin05
Author

Thanks for the effort. I reviewed the code; it seems you also need to change quantize.cc for tflite_convert to work:

index 2fa80f2..45873c2 100644
--- a/tensorflow/lite/toco/graph_transformations/quantize.cc
+++ b/tensorflow/lite/toco/graph_transformations/quantize.cc
@@ -66,7 +66,8 @@ bool SupportsQuantization(const Operator& op) {
          type == OperatorType::kPack || type == OperatorType::kTopK_V2 ||
          type == OperatorType::kRandomUniform ||
          type == OperatorType::kResizeNearestNeighbor ||
-         type == OperatorType::kPRelu;
+         type == OperatorType::kPRelu ||
+         type == OperatorType::kLeakyRelu;
 }

@amitsrivastava78
Contributor

@xiaomin05, thanks for the comments; I will update this part as well. In the meantime, could you please let me know whether you have tested this implementation? If so, kindly publish the results as well.

Regards
Amit

amitsrivastava78 pushed a commit to amitsrivastava78/tensorflow that referenced this issue Mar 26, 2019
amitsrivastava78 pushed a commit to amitsrivastava78/tensorflow that referenced this issue Mar 26, 2019
amitsrivastava78 pushed a commit to amitsrivastava78/tensorflow that referenced this issue Mar 28, 2019
amitsrivastava78 pushed a commit to amitsrivastava78/tensorflow that referenced this issue Apr 2, 2019
@reactivetype

Is this for post-training quantization or quantized training? I was able to use --post_training_quantize for a model with LeakyRelu from tf.nn.leaky_relu.
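For reference, --post_training_quantize only quantizes weights and leaves activations (including LeakyRelu) in float, which is likely why that path works even without a quantized LEAKY_RELU kernel. A minimal sketch of the equivalent TF 2.x conversion; the SavedModel path is a placeholder:

import tensorflow as tf

# Dynamic-range post-training quantization: weights become 8-bit,
# activations such as LeakyRelu still run in float.
converter = tf.lite.TFLiteConverter.from_saved_model("yolo2_saved_model")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # TF 2.x successor of --post_training_quantize
tflite_model = converter.convert()

with open("yolo2_dynamic_range.tflite", "wb") as f:
    f.write(tflite_model)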

amitsrivastava78 pushed a commit to amitsrivastava78/tensorflow that referenced this issue Apr 5, 2019
amitsrivastava78 pushed a commit to amitsrivastava78/tensorflow that referenced this issue Apr 23, 2019
amitsrivastava78 pushed a commit to amitsrivastava78/tensorflow that referenced this issue Apr 28, 2019
@wuhy08
Contributor

wuhy08 commented Oct 9, 2019

Hi,

I am trying to perform post-training integer quantization with tf.lite.OpsSet.TFLITE_BUILTINS_INT8. My model contains LeakyRelu, so it throws:
RuntimeError: Quantization not yet supported for op: LEAKY_RELU. I navigated here and found the PR. I installed tf-nightly (version 2.1.0-dev20191009) and tried to convert again, but the same RuntimeError: Quantization not yet supported for op: LEAKY_RELU is thrown. I wonder why.
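For anyone reproducing this, a minimal full-integer conversion sketch in the TF 2.x API; the SavedModel path, input shape, and random calibration data below are placeholders to adapt to your model:

import numpy as np
import tensorflow as tf

def representative_dataset():
    # Placeholder calibration data; use real preprocessed samples in practice.
    for _ in range(100):
        yield [np.random.rand(1, 416, 416, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("yolo2_saved_model")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict conversion to integer-only builtins; this is the setting that fails
# on versions where LEAKY_RELU has no quantized implementation.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()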

@zye1996

zye1996 commented Mar 9, 2020

Hi,

I am trying to perform post-training integer quantization with tf.lite.OpsSet.TFLITE_BUILTINS_INT8. My model contains LeakyRelu, so it throws:
RuntimeError: Quantization not yet supported for op: LEAKY_RELU. I navigated here and found the PR. I installed tf-nightly (version 2.1.0-dev20191009) and tried to convert again, but the same RuntimeError: Quantization not yet supported for op: LEAKY_RELU is thrown. I wonder why.

Same here. Did you find a solution?

@wuhy08
Contributor

wuhy08 commented Mar 9, 2020

@zye1996

Check #37279, #33397 for current status.

@mohantym
Contributor

mohantym commented Mar 4, 2022

Hi @xiaomin05! Can we move this issue to closed status now? It seems the relevant PRs from the comments above have been merged.

@mohantym mohantym self-assigned this Mar 4, 2022
@mohantym mohantym added stat:awaiting response Status - Awaiting response from author and removed stat:awaiting tensorflower Status - Awaiting response from tensorflower labels Mar 4, 2022
@google-ml-butler

This issue has been automatically marked as stale because it has no recent activity. It will be closed if no further activity occurs. Thank you.

@google-ml-butler google-ml-butler bot added the stale This label marks the issue/pr stale - to be closed automatically if no activity label Mar 11, 2022
@google-ml-butler

Closing as stale. Please reopen if you'd like to work on this further.
