Caused by: java.lang.IllegalArgumentException: Cannot convert between a TensorFlowLite tensor with type UINT8 and a Java object of type [[F (which is compatible with the TensorFlowLite type FLOAT32). #53
Comments
Also facing the same issue upon using:
Hi Vishal,
Can you check whether the error happens when feeding the input tensor or when reading the output tensor? You can set a breakpoint at the following line. If you are able to get there, the error is because the output tensor is of type uint8, but the output of an image classifier is usually a float between 0 and 1. You may need to check how the model was trained.
Thanks,
I'm archiving this thread. Feel free to reopen if you have further questions. Thanks,
Use this code to train your custom model:

```python
import os

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tflite_model_maker import model_spec
from tflite_model_maker.image_classifier import DataLoader  # import was missing

# to unzip a rar
data = DataLoader.from_folder('path-of-custom-folder')
```
Anyone having this issue should export the model with float16 quantization.
Inside your code, change the output buffer from float to byte, and finally get the float values back from the byte data.
after:
I might send a pull request for this.
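A minimal sketch of that byte-to-float conversion (not from this thread): the `scale` and `zeroPoint` values below are placeholder assumptions; in a real app they come from the model's quantization parameters, e.g. via the output `Tensor`'s quantization params in the TFLite Java API.

```java
// Sketch: dequantize a UINT8 output buffer into floats.
// scale and zeroPoint are assumptions here; in a real app read them
// from the model's output tensor quantization parameters.
public class DequantizeSketch {
    static float[] dequantize(byte[] quantized, float scale, int zeroPoint) {
        float[] out = new float[quantized.length];
        for (int i = 0; i < quantized.length; i++) {
            int q = quantized[i] & 0xFF;       // bytes are signed in Java, so mask to 0..255
            out[i] = (q - zeroPoint) * scale;  // standard affine dequantization
        }
        return out;
    }

    public static void main(String[] args) {
        // A common classifier output quantization: scale = 1/255, zeroPoint = 0,
        // so raw byte 255 maps back to ~1.0.
        byte[] raw = { 0, (byte) 128, (byte) 255 };
        float[] probs = dequantize(raw, 1f / 255f, 0);
        System.out.println(probs[0] + " " + probs[1] + " " + probs[2]);
    }
}
```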
I am getting this error: `Cannot copy to a TensorFlowLite tensor (input_1) with 602112 bytes from a Java Buffer with 150528 bytes.`

```python
import os

import numpy as np
import tensorflow as tf
from tflite_model_maker import image_classifier, model_spec
from tflite_model_maker.config import ExportFormat, QuantizationConfig
from tflite_model_maker.image_classifier import DataLoader

EXPORT_DIR = '/home/ailabs/work/TFLite/Model/'

# CAR_POTO_DIR and EPOCHS are defined elsewhere in the script
data = DataLoader.from_folder(CAR_POTO_DIR)
train_data, rest_data = data.split(0.8)
validation_data, test_data = rest_data.split(0.5)  # assumed split; not in the original snippet

model = image_classifier.create(train_data, epochs=EPOCHS,
                                validation_data=validation_data)
loss, accuracy = model.evaluate(test_data)

config = QuantizationConfig.for_float16()
model.export(export_dir=EXPORT_DIR,
             tflite_filename='coco_ssd_mobilenet_v1_1.0_quant.tflite',
             quantization_config=config,
             export_format=ExportFormat.TFLITE)
```
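The two byte counts in that error explain the mismatch: 150528 = 224 × 224 × 3 is one byte per value (UINT8), while the input tensor wants four bytes per value (FLOAT32), i.e. 4 × 150528 = 602112. A small sketch of that arithmetic (the 224×224×3 shape is inferred from the numbers, not stated in the thread):

```java
// Sketch: why the interpreter rejects the buffer.
// The 224x224x3 shape is inferred from the byte counts in the error message.
public class BufferSizeCheck {
    static int bufferBytes(int height, int width, int channels, int bytesPerValue) {
        return height * width * channels * bytesPerValue;
    }

    public static void main(String[] args) {
        int uint8Bytes = bufferBytes(224, 224, 3, 1);   // what the app allocated
        int float32Bytes = bufferBytes(224, 224, 3, 4); // what the tensor expects
        System.out.println(uint8Bytes + " vs " + float32Bytes); // 150528 vs 602112
    }
}
```

So the fix is to allocate the input buffer to match the tensor's dtype (4 bytes per value for a FLOAT32 input), not the raw image bytes.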
Hello, I am facing this issue when trying to run the code:
I am using a model generated with AutoML in Firebase.
Error is as follows:
Caused by: java.lang.IllegalArgumentException: Cannot convert between a TensorFlowLite tensor with type UINT8 and a Java object of type [[F (which is compatible with the TensorFlowLite type FLOAT32).