
Dilated and causal convolutions on microcontrollers #48567

Closed
Lucy20211 opened this issue Apr 16, 2021 · 21 comments
Assignees
Labels
comp:micro Related to TensorFlow Lite Microcontrollers stat:awaiting tensorflower Status - Awaiting response from tensorflower TF 2.4 for issues related to TF 2.4 type:bug Bug

Comments

@Lucy20211

Lucy20211 commented Apr 16, 2021

@tensorflow/micro

System information

  • OS Platform and Distribution: Linux Ubuntu 20.10
  • TensorFlow version: 2.4.1
  • Python version : 3.8

Describe the problem

I would like to run inference on a microcontroller using a model built from Conv1D layers that implement 1D causal convolutions. In particular, in main.cpp I thought I would use something like this:

// Pull in only the needed operations (should match the NN layers).
// The template parameter <n> is the number of ops to be added. Available ops:
// tensorflow/lite/micro/kernels/micro_ops.h

static tflite::MicroMutableOpResolver<1> micro_op_resolver;
tflite_status = micro_op_resolver.Conv1D();

if (tflite_status != kTfLiteOk) {
  error_reporter->Report("Could not add Conv1D op");
  while (1);
}

However, the Conv1D operation does not seem to be supported by TensorFlow Lite: how can I solve this problem? Do I have to create a custom operator in order to implement the op, or is there another way to fix it?
In order to create the op, I wrote this simple code:

import tensorflow as tf
tf.config.run_functions_eagerly(True)

input_shape = (1, 7, 1)
x = tf.random.normal(input_shape)

@tf.function
def convol1d():
    y = tf.keras.layers.Conv1D(1, 3, input_shape=input_shape[1:], name="Conv1D")(x)
    return y

data = convol1d()
print("\n\n data is:", data)

tflite_model_name = 'convol1d'
converter = tf.lite.TFLiteConverter.from_concrete_functions([convol1d.get_concrete_function()])
converter.allow_custom_ops = True
tflite_model = converter.convert()
open(tflite_model_name + '.tflite', 'wb').write(tflite_model)

If I run it, the following error appears:

ValueError: tf.function-decorated function tried to create variables on non-first call.

Instead, I would expect this:

Error: Didn't find custom operator for name 'Conv1D'
Registration failed.

If this latter error appeared, I would try to define the Prepare and Eval functions and construct a TfLiteRegistration in a .cpp file, then add an AddCustom call to register.cpp; am I right? At that point, if all went well, I would try to use the op.

Thanks in advance.

@Lucy20211 Lucy20211 added the type:bug Bug label Apr 16, 2021
@saikumarchalla saikumarchalla added comp:lite TF Lite related issues TF 2.4 for issues related to TF 2.4 labels Apr 19, 2021
@ymodak ymodak added the stat:awaiting tensorflower Status - Awaiting response from tensorflower label Apr 19, 2021
@ymodak ymodak removed their assignment Apr 19, 2021
@abattery abattery assigned ymodak and unassigned abattery Apr 19, 2021
@abattery
Contributor

Unassigning myself from the micro issue.

@jdduke jdduke added comp:micro Related to TensorFlow Lite Microcontrollers and removed comp:lite TF Lite related issues labels Apr 23, 2021
@jdduke
Member

jdduke commented Apr 23, 2021

@njeffrie can you take a look?

@ymodak ymodak removed their assignment Apr 23, 2021
@njeffrie
Contributor

I've seen similar issues, and previously our suggested workaround was to create a model using the conv2d operator instead of conv1d. I don't think conv2d in TFLite or TFLM supports causal padding for convolutions, but somebody more familiar with TFLite would know better than I do.
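To make the padding question concrete, here is a small NumPy sketch (illustrative only: single channel, no bias, the function names are mine) of what "causal" means for a dilated 1D convolution: the input is left-padded by (kernel_size - 1) * dilation zeros before a "valid" convolution, so output[t] depends only on samples at or before time t.

```python
import numpy as np

def causal_conv1d(x, w, dilation=1):
    """Causal dilated 1D cross-correlation, single channel.

    x: (T,) input sequence, w: (K,) kernel.
    output[t] = sum_i w[i] * x[t - (K - 1 - i) * dilation], with zeros
    for indices before the start of the sequence.
    """
    k = len(w)
    pad = (k - 1) * dilation                  # left padding only -> causal
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([
        sum(w[i] * xp[t + i * dilation] for i in range(k))
        for t in range(len(x))
    ])

x = np.array([1.0, 2.0, 3.0, 4.0])
w = np.array([0.5, 0.5])
y1 = causal_conv1d(x, w)               # -> [0.5, 1.5, 2.5, 3.5]
y2 = causal_conv1d(x, w, dilation=2)   # -> [0.5, 1.0, 2.0, 3.0]
```

Because the padding is all on the left, the output length equals the input length and no future sample leaks into output[t]; this is the same trick regardless of whether the underlying kernel runs as Conv1D or as a height-1 Conv2D.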

As you suggested, it seems like your custom op approach is likely best. As you stated, you will have to create a python version for use in training, along with a TFLite or TFLM implementation registered with the same custom op name.

I don't fully understand the custom op implementation you showed above (probably due to my own ignorance of the subject), but it seems odd to use the global "x" within your custom op declaration, rather than passing in an input tensor.

@Lucy20211
Author

Lucy20211 commented Apr 27, 2021

Hi @njeffrie, thanks for your reply: I used a global "x" only as a test input for the custom operator. Regarding the following error:

ValueError: tf.function-decorated function tried to create variables on non-first call.

what could be causing it in this case?

@njeffrie
Contributor

I'm not very familiar with this area, but it seems like this thread may be relevant.

Have you tried changing def convol1d(): to def convol1d(x): and renaming the global x to test_input or something? It looks like other examples pass the input into the tf.function rather than referencing it from within.

@Lucy20211
Author

I tried what you suggested; unfortunately, it does not work.

@Lucy20211
Author

Could I try to implement a custom operator without using @tf.function?

@njeffrie
Contributor

njeffrie commented May 3, 2021

I have very little experience with tf functions - perhaps @jdduke can assign this to somebody more familiar.

@njeffrie njeffrie assigned jdduke and unassigned njeffrie May 3, 2021
@Lucy20211
Author

I think I get the ValueError because I can't create a Keras layer inside a tf.function (the layer creates its variables on the first call), but I'm not sure how I could define the layer inside the tf.function at this point.
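One common way around that ValueError is not to define the layer inside the tf.function at all: construct it once outside, and pass the input in as an argument so the function is traced against a fixed signature. A sketch of that approach (untested against TF 2.4 specifically; the variable names are mine):

```python
import tensorflow as tf

# Build the layer once, outside the tf.function, so its variables are
# created a single time instead of on every call (which is what triggers
# "tried to create variables on non-first call").
conv = tf.keras.layers.Conv1D(1, 3, name="Conv1D")

@tf.function(input_signature=[tf.TensorSpec(shape=(1, 7, 1), dtype=tf.float32)])
def convol1d(x):
    return conv(x)

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [convol1d.get_concrete_function()])
converter.allow_custom_ops = True
tflite_model = converter.convert()
open('convol1d.tflite', 'wb').write(tflite_model)
```

With a single traced signature there is only one trace, the variables are created exactly once, and the concrete function can be handed to the converter as in the original script.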

@Lucy20211
Author

I tried with this code:

import tensorflow as tf

input_shape = (1, 7, 1)

test_input = tf.random.normal(input_shape)
y = None

@tf.function
def convol1d():
    global y
    if y is None:
        y = tf.keras.layers.Conv1D(1, 3, input_shape=input_shape[1:], name="Conv1D")(test_input)
    return y

tflite_model_name = 'convol1d'
converter = tf.lite.TFLiteConverter.from_concrete_functions([convol1d.get_concrete_function()])
converter.allow_custom_ops = True
tflite_model = converter.convert()
open(tflite_model_name + '.tflite', 'wb').write(tflite_model)

However, now I get this error:

AttributeError: 'Tensor' object has no attribute 'numpy'

I thought I should convert test_input (a tensor) to a NumPy array, but I'm not sure about that.

@Lucy20211
Author

I've added tf.config.run_functions_eagerly(True) again and used

data = convol1d()
print("\n\n data is:", data)

before using the TF converter. I don't get errors now, but I would expect this:

Error: Didn't find custom operator for name 'Conv1D'. Registration failed.

Instead, it doesn't appear. What could be causing this?
Is it correct to use this snippet of code to implement a custom operator?

@Lucy20211
Author

Lucy20211 commented May 8, 2021

I know that I can use Conv2D instead of Conv1D, but I'd like to measure the performance of my MCU running an actual Conv1D.

@jdduke
Member

jdduke commented May 10, 2021

Hey @Lucy20211, at the moment, we don't have immediate plans to natively implement Conv1D support, and instead plan to rely on the Conv2D lowering. In theory you could implement Conv1D as a custom op, if you wanted to write a dedicated kernel for it, but it's not clear that you'd see a meaningful resource/performance improvement.

@Lucy20211
Author

Lucy20211 commented May 12, 2021

Ok :)

@Lucy20211
Author

Lucy20211 commented May 12, 2021

Thanks :)

@jdduke
Member

jdduke commented May 12, 2021

Actually, now I'm a bit puzzled as to why you're seeing a Conv1D during conversion at all. TF lowers Conv1D to Conv2D automatically; see the implementation here.

As for what a custom op would look like, you have to distinguish between tensors and attributes. Attributes are embedded in the flexbuffer data for that op, and you would reference them as we do in this custom MFCC op. Could you share the .tflite model that you successfully converted, the one that includes the Conv1D op? That might help.
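That lowering can be illustrated without TensorFlow at all: a "valid" Conv1D over a length-T sequence produces the same numbers as a "valid" Conv2D applied to the sequence viewed as a 1xT image with a 1xK kernel. A NumPy sketch (single channel, no bias, function names are mine):

```python
import numpy as np

def conv1d_valid(x, w):
    """Plain 'valid' 1D cross-correlation, single in/out channel."""
    k = len(w)
    return np.array([np.dot(w, x[t:t + k]) for t in range(len(x) - k + 1)])

def conv2d_valid(img, ker):
    """Plain 'valid' 2D cross-correlation."""
    H, W = img.shape
    kh, kw = ker.shape
    return np.array([[np.sum(ker * img[i:i + kh, j:j + kw])
                      for j in range(W - kw + 1)]
                     for i in range(H - kh + 1)])

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
w = np.array([1.0, 0.0, -1.0])

y1 = conv1d_valid(x, w)                       # 1D result
y2 = conv2d_valid(x[None, :], w[None, :])[0]  # same data as a height-1 image
# y1 and y2 are identical: [-2.0, -2.0, -2.0]
```

This is why the converted model contains a builtin CONV_2D op rather than a custom 'Conv1D' op, and why the "Didn't find custom operator" error never fires.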

@Lucy20211
Author

Lucy20211 commented May 12, 2021

I had not realized that Conv1D is replaced by the TFLite Conv2D op. That is probably why the line

Error: Didn't find custom operator for name 'Conv1D'. Registration failed.

doesn't appear.

So even if there is a custom Conv1D op, the TF-to-TFLite conversion will always favor the TFLite Conv2D op, since it is a builtin op; is that right?

@Lucy20211
Author

Lucy20211 commented May 12, 2021

At the moment I'd try with this model: https://www.programmersought.com/article/13674618779/, to which I'd add the snippet of code concerning the converter.

@jdduke
Member

jdduke commented Aug 30, 2021

Over to @advaitjain for follow-up.

@advaitjain
Member

Similar to tensorflow/tflite-micro#149 (comment), we do not have a direct path to fixing the issue described. Using Conv2D is likely the path of least resistance at the moment (tensorflow/tflite-micro#149 (comment)).

7 participants