
Caused by: java.lang.IllegalArgumentException: Cannot convert between a TensorFlowLite tensor with type UINT8 and a Java object of type [[F (which is compatible with the TensorFlowLite type FLOAT32). #53

Closed
vishal-patel17 opened this issue Aug 30, 2019 · 8 comments

vishal-patel17 commented Aug 30, 2019

Hello, I am facing this issue when trying to run the code:

import 'dart:io';
import 'dart:typed_data';

import 'package:flutter/services.dart' show rootBundle;
import 'package:image/image.dart' as img;
import 'package:tflite/tflite.dart';

// Enclosing method for the snippet (method name illustrative).
Future<List?> classifyImage(File image) async {
  // Load the JPEG bytes, decode, and resize to the model's 112x112 input.
  var imageBytes = (await rootBundle.load(image.path)).buffer;
  img.Image oriImage = img.decodeJpg(imageBytes.asUint8List());
  img.Image resizedImage = img.copyResize(oriImage, height: 112, width: 112);
  var recognitions = await Tflite.runModelOnBinary(
    binary: imageToByteListFloat32(resizedImage, 112, 127.5, 127.5),
    numResults: 6,
    threshold: 0.05,
  );
  return recognitions;
}

// Converts the image to a float32 buffer, normalizing each channel with
// (value - mean) / std, and returns it as raw bytes for the interpreter.
Uint8List imageToByteListFloat32(
    img.Image image, int inputSize, double mean, double std) {
  var convertedBytes = Float32List(1 * inputSize * inputSize * 3);
  var buffer = Float32List.view(convertedBytes.buffer);
  int pixelIndex = 0;
  for (var i = 0; i < inputSize; i++) {
    for (var j = 0; j < inputSize; j++) {
      var pixel = image.getPixel(j, i);
      buffer[pixelIndex++] = (img.getRed(pixel) - mean) / std;
      buffer[pixelIndex++] = (img.getGreen(pixel) - mean) / std;
      buffer[pixelIndex++] = (img.getBlue(pixel) - mean) / std;
    }
  }
  return convertedBytes.buffer.asUint8List();
}

I am using a model generated with AutoML in Firebase.

The error is as follows:
Caused by: java.lang.IllegalArgumentException: Cannot convert between a TensorFlowLite tensor with type UINT8 and a Java object of type [[F (which is compatible with the TensorFlowLite type FLOAT32).

vishal-patel17 changed the title from "Caused by: java.lang.IllegalArgumentException: Cannot convert between a TensorFlowLite buffer with 150528 bytes and a ByteBuffer with 120000 bytes." to "Caused by: java.lang.IllegalArgumentException: Cannot convert between a TensorFlowLite tensor with type UINT8 and a Java object of type [[F (which is compatible with the TensorFlowLite type FLOAT32)." on Aug 30, 2019
vishal-patel17 (Author) commented:

I am also facing the same issue when using:
Tflite.runModelOnImage(path: image.path);

shaqian (Owner) commented Sep 19, 2019

Hi Vishal,

Can you check whether the error happens when feeding the input tensor or the output tensor?

You can set a breakpoint at the following line. If you are able to get here, the error is because the output tensor is of type uint8 but labelProb is float32.
https://github.com/shaqian/flutter_tflite/blob/master/android/src/main/java/sq/flutter/tflite/TflitePlugin.java#L452

The definition of labelProb:
https://github.com/shaqian/flutter_tflite/blob/master/android/src/main/java/sq/flutter/tflite/TflitePlugin.java#L55

The output of an image classification model is usually a float between 0 and 1. You may need to check how the model was trained.
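
If it helps, you can confirm the tensor types offline by loading the .tflite file with the Python interpreter (a minimal sketch; model.tflite is a placeholder for the AutoML model path):

import tensorflow as tf

# Print the declared dtype and shape of each input and output tensor.
# A uint8 output here confirms the mismatch with the float32 labelProb.
interpreter = tf.lite.Interpreter(model_path='model.tflite')
for d in interpreter.get_input_details():
    print('input:', d['name'], d['dtype'], d['shape'])
for d in interpreter.get_output_details():
    print('output:', d['name'], d['dtype'], d['shape'])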

Thanks,
Qian

shaqian (Owner) commented Oct 5, 2019

I'm archiving this thread. Feel free to reopen if you have further questions.

Thanks,
Qian

PepeExpress commented:

I'm facing the same issue both when using Tflite.runModelOnImage(path: image.path); and when using await Tflite.runModelOnBinary(binary: binary);

I attach an image with model properties of the tflite model I'm using.
[Image: Netron view of the model properties]

zoraiz-WOL commented:

Use this code to train your custom model:

import os

import numpy as np
import tensorflow as tf
assert tf.__version__.startswith('2')

from tflite_model_maker import model_spec
from tflite_model_maker import image_classifier
from tflite_model_maker.config import ExportFormat
from tflite_model_maker.config import QuantizationConfig
from tflite_model_maker.image_classifier import DataLoader

import matplotlib.pyplot as plt

# To unzip the dataset archive (in a notebook):
!unzip path-of-zip-file -d path-to-save-extract-file

data = DataLoader.from_folder('path-of-custom-folder')
train_data, rest_data = data.split(0.8)
validation_data, test_data = rest_data.split(0.5)

model = image_classifier.create(train_data, validation_data=validation_data)
loss, accuracy = model.evaluate(test_data)

# Export with float16 quantization so the input/output tensors stay float32.
config = QuantizationConfig.for_float16()
model.export(export_dir='path-to-save-model', quantization_config=config, export_format=ExportFormat.TFLITE)
model.export(export_dir='path-to-save-label', export_format=ExportFormat.LABEL)

2shrestha22 commented:

Anyone having this issue should export the model with float16 quantization:

config = QuantizationConfig.for_float16()
model.export(export_dir='.', tflite_filename='model_fp16.tflite', quantization_config=config)
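
Float16 quantization only stores the weights as float16; the model's input and output tensors remain float32, which matches the float arrays the plugin allocates. A quick way to verify the export (a minimal sketch; the filename follows the export call above):

import tensorflow as tf

# Both should print <class 'numpy.float32'> for a float16-quantized model.
interpreter = tf.lite.Interpreter(model_path='model_fp16.tflite')
print(interpreter.get_input_details()[0]['dtype'])
print(interpreter.get_output_details()[0]['dtype'])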


elkhalifte commented Apr 15, 2022

Inside the plugin code, change the output buffer from float to byte, then recover the float value from the byte data.
before:

float[][] labelProb = new float[1][labels.size()];
for (int i = 0; i < labels.size(); ++i) {
  float confidence = labelProb[0][i];
}

after:

byte[][] labelProb = new byte[1][labels.size()];
for (int i = 0; i < labels.size(); ++i) {
  // Java bytes are signed, so mask to 0-255 first; dividing by 255 maps a
  // typical uint8 score (scale 1/255, zero point 0) back to a 0-1 float.
  float confidence = (labelProb[0][i] & 0xFF) / 255.0f;
}
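
The correct scale and zero point are stored in the model itself; if in doubt, they can be read in Python, where dequantization is real_value = scale * (quantized - zero_point) (a sketch, with model.tflite as a placeholder path):

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='model.tflite')
out = interpreter.get_output_details()[0]
scale, zero_point = out['quantization']  # e.g. (1/255, 0) for a 0-1 score
print(scale, zero_point)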

I might send a pull request for this.


umang752 commented Nov 24, 2023

I am getting this error:

Cannot copy to a TensorFlowLite tensor (input_1) with 602112 bytes from a Java Buffer with 150528 bytes.

import os

import numpy as np
import tensorflow as tf

from tflite_model_maker import model_spec
from tflite_model_maker import image_classifier
from tflite_model_maker.config import ExportFormat
from tflite_model_maker.config import QuantizationConfig
from tflite_model_maker.image_classifier import DataLoader

EXPORT_DIR = '/home/ailabs/work/TFLite/Model/'
CAR_PHOTO_DIR = '/home/ailabs/work/TFLite/car_photos/'
EPOCHS = 1

data = DataLoader.from_folder(CAR_PHOTO_DIR)

train_data, rest_data = data.split(0.8)
validation_data, test_data = rest_data.split(0.5)

model = image_classifier.create(train_data, epochs=EPOCHS, validation_data=validation_data)

loss, accuracy = model.evaluate(test_data)

config = QuantizationConfig.for_float16()

model.export(export_dir=EXPORT_DIR, tflite_filename='coco_ssd_mobilenet_v1_1.0_quant.tflite', quantization_config=config, export_format=ExportFormat.TFLITE)
model.export(export_dir=EXPORT_DIR, tflite_filename='coco_ssd_mobilenet_v1_1.0_labels.txt', export_format=ExportFormat.LABEL)
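
The byte counts point at the cause: 602112 = 224 × 224 × 3 × 4 (a 224×224 RGB float32 input), while 150528 = 224 × 224 × 3 × 1 (one byte per channel). A float16-quantized model still takes float32 input, so the Flutter side has to send four bytes per channel, e.g. an imageToByteListFloat32-style conversion with inputSize 224 as shown earlier in this thread. The arithmetic, as a quick sketch:

import numpy as np

# Bytes expected by a 224x224x3 float32 tensor vs. a uint8 buffer of the same shape.
print(224 * 224 * 3 * np.dtype(np.float32).itemsize)  # 602112
print(224 * 224 * 3 * np.dtype(np.uint8).itemsize)    # 150528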
