
Request: converting clamp to Bounded ReLU would be more efficient than Maximum + Minimum #266

Closed
BmanClark opened this issue Nov 16, 2023 · 3 comments
Labels: dependencies, question (Further information is requested)

Comments

@BmanClark

I was looking at changing the conversion myself, but there isn't a Bounded ReLU operator currently, and a whole new operator was a bit too daunting. Bounded ReLU wouldn't have the same memory inefficiencies as Maximum/Minimum, is a single operator instead of two, and could be fused with the preceding operator where appropriate.
(This is still for the MI-GAN network that I provided a .pt file for in a previous issue.)
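
For reference, a minimal sketch of the pattern under discussion, assuming the `tinynn.converter.TFLiteConverter` API shown in this project's README (the bounds and output path here are illustrative). A `torch.clamp` with arbitrary bounds like these currently comes out as a MAXIMUM + MINIMUM pair in the TFLite graph:

```python
# Minimal repro sketch; assumes the TFLiteConverter usage from the README.
import torch
import torch.nn as nn

from tinynn.converter import TFLiteConverter


class ClampModel(nn.Module):
    def forward(self, x):
        # torch.clamp with arbitrary bounds; currently lowered to
        # MAXIMUM + MINIMUM in the converted TFLite graph.
        return torch.clamp(x, -3.0, 3.0)


model = ClampModel()
model.eval()

dummy_input = torch.rand(1, 3, 224, 224)
converter = TFLiteConverter(model, dummy_input, tflite_path='clamp.tflite')
converter.convert()
```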

@peterjc123 (Collaborator) commented Nov 17, 2023

@BmanClark There is no bounded ReLU op among TFLite's builtin operators (except Relu6, Relu_N1_1, and Relu0_1), as you can see here:

https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/core/kernels/register.cc
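
In other words, only clamps whose bounds happen to match one of those three builtins can avoid the MAXIMUM + MINIMUM lowering. A sketch of the lookup a conversion pass could do (the op name strings follow the TFLite schema; the helper itself is hypothetical, not part of this project):

```python
# Hypothetical helper: map clamp bounds to the few bounded-ReLU-like
# TFLite builtins listed above; anything else needs MAXIMUM + MINIMUM.
from typing import Optional, Tuple

# (min, max) -> TFLite builtin op name (per the TFLite schema)
_BOUNDED_RELU_BUILTINS = {
    (0.0, 6.0): 'RELU6',
    (-1.0, 1.0): 'RELU_N1_TO_1',
    (0.0, 1.0): 'RELU_0_TO_1',
}


def builtin_for_clamp(bounds: Tuple[float, float]) -> Optional[str]:
    """Return a TFLite builtin covering these clamp bounds, if any."""
    return _BOUNDED_RELU_BUILTINS.get(bounds)


assert builtin_for_clamp((0.0, 6.0)) == 'RELU6'
assert builtin_for_clamp((-3.0, 3.0)) is None  # falls back to MAXIMUM + MINIMUM
```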

@peterjc123 (Collaborator)

@BmanClark I guess you need to create an issue for TFLite. Or, if you only need to run on CPU, you can create a custom op and then update the conversion logic to map to that op.
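
For illustration, a toy sketch of what that conversion-logic change could look like: fusing the MAXIMUM + MINIMUM pair into a single custom bounded-ReLU op. The node format and the CUSTOM_BOUNDED_RELU name are hypothetical stand-ins, not this project's real IR, and the custom op would still need a matching runtime kernel:

```python
# Hypothetical fusion pass over a toy node list (not this project's real IR).
def fuse_bounded_relu(nodes):
    """Replace MAXIMUM(x, lo) followed by MINIMUM(., hi) with one custom op."""
    fused, i = [], 0
    while i < len(nodes):
        cur = nodes[i]
        nxt = nodes[i + 1] if i + 1 < len(nodes) else None
        if (cur['op'] == 'MAXIMUM' and nxt is not None
                and nxt['op'] == 'MINIMUM' and nxt['input'] == cur['output']):
            fused.append({
                'op': 'CUSTOM_BOUNDED_RELU',  # hypothetical custom op name
                'input': cur['input'],
                'output': nxt['output'],
                'min': cur['const'],          # lower bound from MAXIMUM
                'max': nxt['const'],          # upper bound from MINIMUM
            })
            i += 2                            # consume both fused nodes
        else:
            fused.append(cur)
            i += 1
    return fused


graph = [
    {'op': 'MAXIMUM', 'input': 'x', 'output': 't0', 'const': -3.0},
    {'op': 'MINIMUM', 'input': 't0', 'output': 'y', 'const': 3.0},
]
print(fuse_bounded_relu(graph))
# [{'op': 'CUSTOM_BOUNDED_RELU', 'input': 'x', 'output': 'y', 'min': -3.0, 'max': 3.0}]
```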

peterjc123 added the question (Further information is requested) and dependencies labels Nov 17, 2023
@BmanClark (Author)

Ah, indeed, I am looking at ArmNN's op, which must be custom. I'll take it up with them... Thank you.
