Internal compiler error #31368
Comments
@DocDriven I have the same issue.
@cuongdv1 Unfortunately, I haven't figured it out yet because, as far as I know, the source code for the compiler is not open source, so I couldn't debug it. The best bet is to wait for a new release of the compiler and try again.
I'll add that I get this error when I try to compile an object detection tflite model produced by Google Cloud AutoML. Also using Edge TPU Compiler version 2.0.258810407.
Are there any updates on this topic? I have come across this problem multiple times now, even with networks that are shipped with keras (e.g. VGG16). The test code for this is below.
I used the Tiny ImageNet dataset for the post-training quantization. Also, my test picture is the one from the Coral demo, which I have attached; it should output magpie (bird.jpg). I can produce a tflite file with this code, but the TPU compiler throws the "Internal compiler error" again. Can you please confirm whether this is reproducible?
I have the same error using tensorflow 2.0 nightly and tensorflow 1.0 nightly. Any update on this? Since the error is very generic, it is very hard to debug...
@Lap1n |
I had the same error using the MobileNet v2 model in Keras with the tiny-imagenet-200 dataset. The TPU compiler version was 2.0.267685300. The quantized tflite file was produced successfully, but it cannot be compiled.
@ynorz can you show me the code you used for converting and quantizing your model? |
```python
import pathlib

import numpy as np
import tensorflow as tf

def get_label(file_path):
    # convert the path to a list of path components
    parts = tf.strings.split(file_path, '/')
    # the third to last component is the class directory
    return parts[-3] == CLASS_NAMES

def decode_img(img):
    # convert the compressed string to a 3D uint8 tensor
    img = tf.image.decode_jpeg(img, channels=3)
    # use `convert_image_dtype` to convert to floats in the [0, 1] range
    img = tf.image.convert_image_dtype(img, tf.float32)
    # resize the image to the desired size
    return tf.image.resize(img, [224, 224])

def process_path(file_path):
    label = get_label(file_path)
    # load the raw data from the file as a string
    img = tf.io.read_file(file_path)
    img = decode_img(img)
    return label, img

data_dir = pathlib.Path('/my_data_dir')
list_ds = tf.data.Dataset.list_files(str(data_dir / '*/*/*'))
image_count = len(list(data_dir.glob('*/*/*.JPEG')))
CLASS_NAMES = np.array([item.name for item in data_dir.glob('*')])
labeled_ds = list_ds.map(process_path, num_parallel_calls=100)

tf.compat.v1.enable_eager_execution()

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_data_gen():
    for _, image in labeled_ds.take(100):
        image = tf.expand_dims(image, 0)
        yield [image]

converter.representative_dataset = tf.lite.RepresentativeDataset(representative_data_gen)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

converted_tflite_model = converter.convert()
with open(TFLITE_MODEL, "wb") as f:
    f.write(converted_tflite_model)
```
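As a side note, the `representative_dataset` the converter consumes is just a callable that yields, for each calibration sample, a list with one entry per model input. Here is a minimal pure-Python sketch of that generator contract (plain nested lists stand in for image tensors so it runs without TensorFlow; the names are illustrative, not from the snippet above):

```python
# Sketch of the generator contract TFLiteConverter.representative_dataset expects:
# a callable yielding one list per calibration sample, with one entry per
# model input. Plain nested lists stand in for image tensors here.

calibration_samples = [[[0.0, 0.1], [0.2, 0.3]] for _ in range(5)]  # stand-in "images"

def representative_data_gen():
    # yield at most 100 samples, each wrapped in a batch-of-1 list,
    # mirroring `tf.expand_dims(image, 0)` in the snippet above
    for image in calibration_samples[:100]:
        yield [[image]]  # outer list = model inputs, inner list = batch dim

samples = list(representative_data_gen())
print(len(samples))     # number of calibration samples yielded
print(len(samples[0]))  # one entry per model input
```

If the generator yields tensors whose dtype or shape does not match the model's input, the converter will fail during calibration, which is worth ruling out before blaming the Edge TPU compiler.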
@bhavitvyamalik |
There can be two possibilities for why your model is giving an internal compiler error.
I tried to do the post-training integer quantization following the official guide on MNIST. This guide only runs on tensorflow 1.15.0, which to some extent proves your point that tensorflow 1.15 works better. However, I still get the internal compiler error with compiler version 2.0.267685300.
If you are getting an internal compiler error, then some of your operations are not supported by the Edge TPU during compilation. The model can still run on the CPU of your Edge TPU device, but that increases inference time to a large extent. Most importantly, you can compile only these models successfully for the Edge TPU:
If you use any other model, it might not work properly. Try one of these models, followed by quantization using the code I posted earlier. It should work flawlessly.
Figured out a solution that sounds stupid. I moved the 'models' folder containing the .tflite file to '/home/username/edgetpu', and then the compiler works with the same compile command provided on the official website. This 'edgetpu' folder was created by a beginner object detection retraining example from the official website, using a dataset of American bulldog and Abyssinian images. My setup: custom dataset, mobilenet_v1 or mobilenet_v2 downloaded from the Coral website, Coral accelerator.
On this same example code, I initially received this error after compiling with edgetpu_compiler output_tflite_graph.tflite:
But was able to get around it after I ran with sudo, which gives the following output:
Note: I didn't have to move the files around as mentioned in the previous post.
I'm having the same issue with a tflite model containing transpose convolution. Tensorflow 1.x does not seem to support transpose convolution. With the latest tf2.0-nightly, the quantized tflite model gives the error 'Internal compiler error. Aborting!'. It would be helpful if the compiler printed the exact cause of the failure, i.e. whether some operator or its version is not supported. It seems to work up to some convolutional layer (602) and produces the compiler error once that layer is included.
I am also having a similar issue. I have a custom model which uses transpose convolution that I want to compile for edge tpu. Was there any solution? |
Have you tried |
Hi There, We are checking to see if you still need help on this, as you are using an older version of tensorflow which is officially considered end of life. We recommend that you upgrade to the latest 2.x version and let us know if the issue still persists in newer versions. Please open a new issue for any help you need against 2.x, and we will get you the right help. This issue will be closed automatically 7 days from now. If you still need help with this issue, please provide us with more information.
Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub.
System information
I have created a fully-quantized tf lite model from a saved model. But when trying to compile it with edgetpu_compiler, I get an error:
The error message is unfortunately not very helpful. The non-compiled version is loadable and produces the correct results.
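One cheap sanity check before invoking the compiler (just an illustrative sketch, independent of this bug): a valid `.tflite` file is a FlatBuffer whose 4-byte file identifier `TFL3` sits at bytes 4..8, so a truncated or mislabeled file can be caught early. The file names below are hypothetical.

```python
def looks_like_tflite(path):
    # FlatBuffer files carry a 4-byte file identifier at bytes 4..8;
    # for TensorFlow Lite models that identifier is b"TFL3".
    with open(path, "rb") as f:
        header = f.read(8)
    return len(header) == 8 and header[4:8] == b"TFL3"

# Hypothetical usage:
# if not looks_like_tflite("litemodel.tflite"):
#     print("not a TFLite flatbuffer; recheck the conversion step")
```

This obviously cannot explain the internal compiler error itself, but it rules out a corrupted download or an accidentally renamed file before digging deeper.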
I have attached the model that I try to compile, as well as its visualization (via visualize.py).
litemodel.tar.gz