Dilated and causal convolutions on microcontrollers #48567
Comments
Unassigning myself from the micro issue.
@njeffrie can you take a look?
I've seen similar issues, and previously our suggested workaround was to create a model using the conv2d operator instead of conv1d. I don't think conv2d in TFLite or TFLM supports causal padding for convolutions, but somebody more familiar with TFLite would know better than I do. As you suggested, it seems like your custom op approach is likely best. As you stated, you will have to create a python version for use in training, along with a TFLite or TFLM implementation registered with the same custom op name. I don't fully understand the custom op implementation you showed above (probably due to my own ignorance of the subject), but it seems odd to use the global "x" within your custom op declaration, rather than passing in an input tensor.
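For reference, a minimal sketch of that Conv2D-based workaround (the filter count, kernel size, dilation rate, and sequence length below are illustrative assumptions, not values from this issue): a causal, dilated Conv1D over a (batch, time, channels) tensor can be expressed by left-padding the time axis, inserting a dummy height dimension, and applying a Conv2D with a 1xK kernel.

import tensorflow as tf

# Illustrative parameters (assumptions, not from this issue).
filters, kernel_size, dilation, time_steps = 1, 3, 2, 7
pad = dilation * (kernel_size - 1)  # left padding makes the convolution causal

inputs = tf.keras.Input(shape=(time_steps, 1))                # (batch, time, channels)
x = tf.keras.layers.ZeroPadding1D(padding=(pad, 0))(inputs)   # pad only on the left
x = tf.keras.layers.Reshape((1, time_steps + pad, 1))(x)      # (batch, 1, time+pad, channels)
x = tf.keras.layers.Conv2D(filters, (1, kernel_size),
                           dilation_rate=(1, dilation),
                           padding="valid")(x)
outputs = tf.keras.layers.Reshape((time_steps, filters))(x)   # back to (batch, time, filters)
model = tf.keras.Model(inputs, outputs)

tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

Since the resulting graph should contain only builtin ops (PAD, RESHAPE, CONV_2D), it should convert without allow_custom_ops and run on the existing TFLite Micro kernels.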
Hi @njeffrie, thanks for your reply: I used a global "x" only as a test input for the custom operator. Regarding the following error:
what could it depend on in this case?
I'm not very familiar with this area, but it seems like this thread may be relevant. Have you tried changing
I had tried what you suggested; unfortunately it does not work.
Could I try to implement a custom operator without using @tf.function?
I have very little experience with tf.function - perhaps @jdduke can assign this to somebody more familiar.
It seems that I get the ValueError because I can't create a Keras layer inside tf.function (the layer creates variables when it is built), but I'm not sure how I should define the layer with tf.function at this point.
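For what it's worth, a minimal sketch of the usual way around that ValueError (this is an assumed rearrangement of the snippet from the issue description, not code from this thread): create the layer once outside the traced function, so its variables are created a single time, and wrap only the call in tf.function.

import tensorflow as tf

# Create the layer (and therefore its variables) once, outside tf.function.
conv = tf.keras.layers.Conv1D(1, 3, name="Conv1D")

@tf.function(input_signature=[tf.TensorSpec(shape=(1, 7, 1), dtype=tf.float32)])
def convol1d(x):
    # Only the call is traced here, so no new variables are created on
    # later calls, which is what triggers the ValueError.
    return conv(x)

data = convol1d(tf.random.normal((1, 7, 1)))

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [convol1d.get_concrete_function()])
converter.allow_custom_ops = True
tflite_model = converter.convert()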
I tried with this code:
However, now I get this error:
I thought I should convert test_input (a tensor) to a numpy array, but I'm not sure about it.
I've added
before using the TF converter. I don't get errors now, but I would expect this:
Instead, it doesn't happen. What could it depend on in this case?
I know that I can use Conv2D instead of Conv1D, but I'd like to measure the performance of my MCU running the same Conv1D.
Hey @Lucy20211, at the moment, we don't have immediate plans to natively implement Conv1D support, and instead plan to rely on the Conv2D lowering. In theory you could implement Conv1D as a custom op, if you wanted to write a dedicated kernel for it, but it's not clear that you'd see a meaningful resource/performance improvement.
Ok :) |
Thanks :) |
Actually, now I'm a bit puzzled as to why you're seeing a Conv1D during conversion at all. TF lowers Conv1D to Conv2D automatically (see the implementation here). As for what a custom op would look like, you have to distinguish between tensors and attributes. Attributes will be embedded in the flexbuffer data for that op, and you would reference them as we do in this custom MFCC op. It might help if you could share the
I had not realized that Conv1D is replaced by the TFLite Conv2D op. It is probably for this reason that the line
doesn't appear. So, even though there is a custom Conv1D op, the TF to TFLite conversion always favors the TFLite Conv2D op, since it is a builtin op, is that right?
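One quick way to check that lowering in practice (a sketch, not from this thread; _get_ops_details() is a private helper on the Python Interpreter and may differ between TF versions) is to convert a small Conv1D model and print the operator names that end up in the .tflite file:

import tensorflow as tf

# Tiny model with a single causal, dilated Conv1D layer (illustrative values).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(7, 1)),
    tf.keras.layers.Conv1D(1, 3, padding="causal", dilation_rate=2),
])

tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# List the ops in the converted model; the expectation is CONV_2D (plus
# pad/reshape ops) rather than any Conv1D op.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
for op in interpreter._get_ops_details():
    print(op["op_name"])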
At the moment I'd like to try with this model: https://www.programmersought.com/article/13674618779/, to which I'd add the snippet of code for the converter.
Over to @advaitjain for follow-up.
Similar to tensorflow/tflite-micro#149 (comment), we do not have a direct path to fixing the issue described. Using Conv2D is likely the path of least resistance at the moment (tensorflow/tflite-micro#149 (comment)).
@tensorflow/micro
System information
Describe the problem
I would like to run inference on the microcontroller above, using a model characterized by Conv1D layers that implement 1D causal convolutions. In particular, in main.cpp I thought I would use something like this:
// Pull in only needed operations (should match NN layers).
// Template parameter <n> is number of ops to be added. Available ops:
// tensorflow/lite/micro/kernels/micro_ops.h
static tflite::MicroMutableOpResolver<1> micro_op_resolver;
tflite_status = micro_op_resolver.Conv1D();
if (tflite_status != kTfLiteOk) {
  error_reporter->Report("Could not add Conv1D op");
  while (1);
}
However, the Conv1D operation does not seem to be supported by TensorFlow Lite: how can I solve this problem? Do I have to create a custom operator in order to implement the op, or is there another way to fix it?
In order to create the op, I wrote this simple code:
import tensorflow as tf

tf.config.run_functions_eagerly(True)

input_shape = (1, 7, 1)
x = tf.random.normal(input_shape)

@tf.function
def convol1d():
    y = tf.keras.layers.Conv1D(1, 3, input_shape=input_shape[1:], name="Conv1D")(x)
    return y

data = convol1d()
print("\n\n data is:", data)

tflite_model_name = 'convol1d'
converter = tf.lite.TFLiteConverter.from_concrete_functions([convol1d.get_concrete_function()])
converter.allow_custom_ops = True
tflite_model = converter.convert()
open(tflite_model_name + '.tflite', 'wb').write(tflite_model)
If I run it, the following error appears:
ValueError: tf.function-decorated function tried to create variables on non-first call.
Instead, I would expect this:
Error: Didn't find custom operator for name 'Conv1D'
Registration failed.
If this latter error appeared, I would try to define the Prepare and Eval functions and construct a TfLiteRegistration in a .cpp file, then add an AddCustom call to register.cpp, am I right? At this point, if all went well, I would try to use the op.
Thanks in advance.