Address missing TensorFlow operations to TFLite: #21526

Closed
8 tasks
suharshs opened this issue Aug 9, 2018 · 99 comments
Assignees
Labels
comp:lite TF Lite related issues stale This label marks the issue/pr stale - to be closed automatically if no activity stat:awaiting response Status - Awaiting response from author type:feature Feature requests

Comments

@suharshs

suharshs commented Aug 9, 2018

We track operations that we need to add to TensorFlow Lite here:

Please comment with new operations you may want; we will add them to the list and remove your comment. Thanks!

@AnishShah
Contributor

Hi, if anyone's not working on this, I'd like to work on it.

@synchro10

Hi, is it worth waiting for an implementation of FakeQuantWithMinMaxVarsPerChannel?

@jdduke
Member

jdduke commented Nov 16, 2018

Hi all,

As we work toward fleshing out the builtin op library for TensorFlow Lite, we've been working on an experimental feature that allows using select TensorFlow ops from within the TensorFlow Lite runtime. The goal is to help reduce some of the friction for using models that rely on ops not yet natively supported by TensorFlow Lite (at the cost of increased binary size). This feature requires opting in during model conversion, as well as adding an additional dependency. More details can be found here.

Feedback is very much appreciated (either via GitHub or directly via tflite@tensorflow.org), and we'll be adding and refining functionality over the coming weeks. Cheers.
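With current TF 2.x APIs, opting in to select TF ops looks roughly like this (a minimal sketch; the tiny Keras model is only a stand-in for your own model):

```python
import tensorflow as tf

# Stand-in model; any Keras model or SavedModel converts the same way.
model = tf.keras.Sequential([tf.keras.Input(shape=(8,)),
                             tf.keras.layers.Dense(4)])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # prefer builtin TFLite ops
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to select TF ops (Flex)
]
tflite_model = converter.convert()  # serialized flatbuffer bytes
```

Note that a model using select TF ops also needs the Flex delegate dependency linked into the runtime, which is where the binary-size cost comes from.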

@jdduke
Member

jdduke commented Nov 16, 2018

Hi, is it worth waiting for an implementation of FakeQuantWithMinMaxVarsPerChannel?

It is unlikely that we'll add support for FakeQuant ops in the near future. Your best bet is to look into using post-training quantization.
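For reference, post-training (dynamic-range) quantization is a single converter flag in current TF 2.x APIs; a minimal sketch with a stand-in Keras model:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.Input(shape=(8,)),
                             tf.keras.layers.Dense(4)])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Quantize weights after training; no FakeQuant ops in the graph needed.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_model = converter.convert()
```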

@StephenLee2016

@suharshs @jdduke Hi, when will the Conv3D feature be released?

@andrehentz
Contributor

Hi @StephenLee2016 Conv3D is one of the select TF ops supported via tflite_convert.

@pxEkin

pxEkin commented Nov 24, 2018

Hi all,

As we work toward fleshing out the builtin op library for TensorFlow Lite, we've been working on an experimental feature that allows using select TensorFlow ops from within the TensorFlow Lite runtime. The goal is to help reduce some of the friction for using models that rely on ops not yet natively supported by TensorFlow Lite (at the cost of increased binary size). This feature requires opting in during model conversion, as well as adding an additional dependency. More details can be found here.

Feedback is very much appreciated (either via GitHub or directly via tflite@tensorflow.org), and we'll be adding and refining functionality over the coming weeks. Cheers.


My TF version is 1.12.0, and I cannot find the target_ops option in tflite_convert. Why?

@andrehentz
Contributor

@ToBigboss Please note that you will need to build from source to gain early access to the new features.

@mirkomartn

@Lucy20211

Is it worth waiting for an implementation of Conv1D? I know I can use Conv2D instead of Conv1D, but I need to measure the performance of my MCU, which uses the same Conv1D, and compare it with Conv2D for different models (if I implemented Conv1D with Conv2D, it would use the same resources and have the complexity of a Conv2D, so the comparison would show nothing).

Facing the same issue, only I don't know how to use Conv2D instead of Conv1D. Could you point me to some tutorial/reference?

@Lucy20211

Hi @mirkomartn,

what you could do is well summarized in the last comment here: #43141.
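The usual trick is to treat the 1-D signal and kernel as height-1 2-D arrays, so the Conv1D becomes a Conv2D with a (1, k) kernel. A plain-NumPy sketch of the equivalence (conv1d/conv2d here are hypothetical helpers, not TF APIs):

```python
import numpy as np

def conv1d(x, k):
    """Valid 1-D cross-correlation: x is (length,), k is (kw,)."""
    kw = len(k)
    return np.array([np.dot(x[i:i + kw], k) for i in range(len(x) - kw + 1)])

def conv2d(x, k):
    """Valid 2-D cross-correlation: x is (h, w), k is (kh, kw)."""
    kh, kw = k.shape
    h, w = x.shape
    return np.array([[np.sum(x[i:i + kh, j:j + kw] * k)
                      for j in range(w - kw + 1)]
                     for i in range(h - kh + 1)])

x = np.arange(10.0)
k = np.array([1.0, -1.0, 2.0])

out1d = conv1d(x, k)
# Adding a singleton height dimension gives the identical result.
out2d = conv2d(x[None, :], k[None, :])[0]
```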

@mirkomartn

@Lucy20211 Great, thank you!

@avroshk
Contributor

avroshk commented Jun 4, 2021

Wanted to add tf.Selu to the list of TFLite ops for promotion. It is available via Select TF Ops, but it would be nice to have it as a builtin since we already have ELU. Thanks!

@thaink
Member

thaink commented Jul 5, 2021

#50595 requests BroadcastGradientArgs, DynamicStitch, EluGrad, Sign, StridedSliceGrad, UnsortedSegmentSum

@r-wheeler

Can tf.einsum be supported?

@thaink
Member

thaink commented Jul 14, 2021

Can tf.einsum be supported?

tf.einsum with static shape is fully supported via our converter. I think you need dynamic shape, right?
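For context, "static shape" means every dimension in the einsum equation is known at conversion time, so the contraction can be lowered to fixed matmuls and transposes. NumPy is used here only to illustrate the semantics:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((2, 3, 4))
b = rng.standard_normal((2, 4, 5))

# A batched contraction with fully known shapes...
out = np.einsum('bij,bjk->bik', a, b)

# ...is exactly a batched matrix multiply, which converters can express.
batched_matmul = a @ b
```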

@AnastGerus

Hi,
Could you please assist with AssignVariableOp, ReadVariableOp, and VarHandleOp? Is it worth waiting?
I know it's possible to use them with SELECT_TF_OPS, but that option increases the DLL size significantly, so it would be nice to have them as builtins. Thanks!

@jdduke jdduke removed their assignment Aug 30, 2021
@hamlatzis

Failed to port TensorListFromTensor as a custom op. Is it going to be added? I know I can use it through Select TF Ops, which will create a Flex version with the new experimental converter, but the .aar file created for Android gets too large.

@aselle aselle removed their assignment Jan 13, 2022
@zacps

zacps commented Jan 13, 2022

Would be useful to have:

AvgPool3D, MaxPool3D, TensorListFromTensor, TensorListGetItem, TensorListReserve, TensorListSetItem, TensorListStack

Model is I3D with a custom top layer.

@mohantym
Contributor

Hi @suharshs !
I think the quantized Div error has been taken care of in version 2.8 now. Attached are the gist and relevant threads 1 and 2 for reference.
Thank you!

@mohantym mohantym self-assigned this Jul 11, 2022
@mohantym mohantym removed their assignment Jul 21, 2022
@lp0617

lp0617 commented Jul 27, 2022

I want to convert my model's .pb file to .tflite and run it on an edge device.
But I get an error about the "MatrixBandPart" op. Is this op still not supported?

The error is:
:0: error: failed while converting: 'main': Ops that need custom implementation (enabled via setting the -emit-custom-ops flag):
tf.MatrixBandPart {device = ""}

I also ran into this warning:
WARNING:absl:Found untraced functions such as dense_32_layer_call_and_return_conditional_losses, dense_32_layer_call_fn, embedding_layer_call_and_return_conditional_losses, embedding_layer_call_fn, dropout_layer_call_and_return_conditional_losses while saving (showing 5 of 295). These functions will not be directly callable after loading.

Do you have any solution for these errors?

@mohantym
Contributor

@lp0617 !
The above warning does not affect the TFLite conversion much. To use a custom op, you need to register the op in the TFLite kernels and enable custom ops during conversion:
converter.allow_custom_ops = True

Attached is the relevant thread for reference.

Thank you!
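If you do end up writing MatrixBandPart as a custom op, its semantics are straightforward; a NumPy sketch (band_part is a hypothetical helper mirroring tf.linalg.band_part):

```python
import numpy as np

def band_part(m, num_lower, num_upper):
    """Keep a band of the matrix, zeroing everything outside it.

    Keeps entries with (i - j) <= num_lower and (j - i) <= num_upper;
    a negative bound means "keep that whole triangle"."""
    i, j = np.indices(m.shape[-2:])
    keep = ((num_lower < 0) | (i - j <= num_lower)) & \
           ((num_upper < 0) | (j - i <= num_upper))
    return np.where(keep, m, np.zeros_like(m))

m = np.arange(16.0).reshape(4, 4)
lower = band_part(m, -1, 0)  # lower triangle, like np.tril
upper = band_part(m, 0, -1)  # upper triangle, like np.triu
```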

@sachinprasadhs sachinprasadhs self-assigned this Oct 7, 2022
@sachinprasadhs
Contributor

Since all the issues mentioned in the task list are closed, could you please close this issue? Thanks!

@sachinprasadhs sachinprasadhs added the stat:awaiting response Status - Awaiting response from author label Oct 7, 2022
@google-ml-butler

This issue has been automatically marked as stale because it has no recent activity. It will be closed if no further activity occurs. Thank you.

@google-ml-butler google-ml-butler bot added the stale This label marks the issue/pr stale - to be closed automatically if no activity label Oct 14, 2022
@google-ml-butler

Closing as stale. Please reopen if you'd like to work on this further.

@ZYX-MLer

tf.raw_ops.MatrixInverse is not supported in TFLite, while the op BatchMatrixInverse is not available in GraphDef version 1205. How can I calculate the inverse of a matrix in TFLite?
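One workaround is to compute the inverse host-side before invoking the model, or to build it from ops TFLite does support (basic arithmetic plus slicing). For small fixed-size matrices, Gauss-Jordan elimination is simple enough to hand-roll; a NumPy sketch of the algorithm:

```python
import numpy as np

def gauss_jordan_inverse(a):
    """Invert a square matrix via Gauss-Jordan elimination with partial pivoting."""
    n = a.shape[0]
    # Augment [A | I] and row-reduce A to the identity; I becomes A^-1.
    aug = np.hstack([a.astype(np.float64), np.eye(n)])
    for col in range(n):
        pivot = col + np.argmax(np.abs(aug[col:, col]))  # partial pivoting
        aug[[col, pivot]] = aug[[pivot, col]]            # swap rows
        aug[col] /= aug[col, col]                        # normalize pivot row
        for row in range(n):
            if row != col:
                aug[row] -= aug[row, col] * aug[col]     # eliminate column
    return aug[:, n:]

m = np.array([[4.0, 7.0],
              [2.0, 6.0]])
inv = gauss_jordan_inverse(m)
```

This is only a sketch of the math; in a real graph you would unroll the loops for your fixed matrix size, or simply keep the inverse outside the TFLite model.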

@Jenilshyara1

Request to add tf.keras.layers.TextVectorization to the TFLite ops.

@BencePalos

TensorListReserve would be much appreciated

@mogokhalifa

Request to add tf.RealDiv and tf.erf please. Thanks!

@stallam-unb

Request to add Recurrent Layers (GRU, LSTM).
