
Troubles with Quantized TFLite models #4

Closed · selimbek opened this issue Jan 10, 2020 · 3 comments

Labels: custom model, question (Further information is requested)

Comments


selimbek commented Jan 10, 2020

Hi,

First of all, thanks for this repo. The examples are clear and work perfectly well. I successfully integrated my own UNET model using your DeepLab scene implementation.

But today I tried to dig deeper into optimization, and I tried to use quantized models.
I took mobilenetv2_coco_voc_trainaug_8bit from the official list (https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/quantize.md) and injected it into the DeepLab scene.
I fixed some issues due to the fact that ArgMax is already applied inside the model, so the output shape is [1, 513, 513] instead of the previous [1, 513, 513, 21].

Your DeepLab class became:


 public class DeepLab2 : BaseImagePredictor<float>
    {
        ...

        float[,] outputs0; // height, width

        ...

        public DeepLab2(string modelPath, ComputeShader compute) : base(modelPath, true)
        {
            var odim0 = interpreter.GetOutputTensorInfo(0).shape;

            Debug.Assert(odim0[1] == height);
            Debug.Assert(odim0[2] == width);

            outputs0 = new float[odim0[1], odim0[2]];
            labelPixels = new Color32[width * height];
            labelTex2D = new Texture2D(width, height, TextureFormat.RGBA32, 0, false);
            ...

        }

       ....

        public override void Invoke(Texture inputTex)
        {
            ToTensor(inputTex, inputs);

            interpreter.SetInputTensorData(0, inputs);
            interpreter.Invoke();
            interpreter.GetOutputTensorData(0, outputs0);
        }

       ...

        public Texture2D GetResultTexture2D()
        {

            int rows = outputs0.GetLength(0); // y
            int cols = outputs0.GetLength(1); // x
            // int labels = outputs0.GetLength(2);
            for (int y = 0; y < rows; y++)
            {
                for (int x = 0; x < cols; x++)
                {
                    // outputs0 is declared [height, width], so index it [y, x]
                    labelPixels[y * cols + x] = COLOR_TABLE[(int)outputs0[y, x]];
                }
            }

            labelTex2D.SetPixels32(labelPixels);
            labelTex2D.Apply();

            return labelTex2D;
        }

        ...
    }

But now I get an error from Interpreter.cs:

Exception: TensorFlowLite operation failed.
TensorFlowLite.Interpreter.ThrowIfError (System.Int32 resultCode) (at Assets/TensorFlowLite/SDK/Scripts/Interpreter.cs:196)
TensorFlowLite.Interpreter.SetInputTensorData (System.Int32 inputTensorIndex, System.Array inputTensorData) (at Assets/TensorFlowLite/SDK/Scripts/Interpreter.cs:122)
TensorFlowLite.DeepLab2.Invoke (UnityEngine.Texture inputTex) (at Assets/Samples/DeepLab513/DeepLab2.cs:97)
DeepLabSample2.Execute (UnityEngine.Texture texture) (at Assets/Samples/DeepLab513/DeepLabSample2.cs:45)
DeepLabSample2.Update () (at Assets/Samples/DeepLab513/DeepLabSample2.cs:40)

If you have any ideas :)

Thank you in advance !
Best regards,
--Selim

selimbek changed the title from Trouble with Quantized TFLite models to Troubles with Quantized TFLite models on Jan 10, 2020
asus4 (Owner) commented Jan 11, 2020

Hi @selimbek, thanks for using my repo. I think the error is caused by the input data not being the correct shape or type. Please check the model's input; the input type will be ushort on the quantized model.
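For reference, a minimal sketch of how you could check this before feeding data (this assumes the TensorInfo struct returned by this repo's Interpreter wrapper exposes type and shape fields, as GetOutputTensorInfo(0).shape is already used above; exact names may differ between versions):

    // Minimal sketch: inspect the input tensor before calling SetInputTensorData.
    // Assumption: GetInputTensorInfo() returns a struct with `type` and `shape`.
    void LogInputTensorInfo(TensorFlowLite.Interpreter interpreter)
    {
        var info = interpreter.GetInputTensorInfo(0);
        UnityEngine.Debug.Log($"input type: {info.type}, shape: [{string.Join(",", info.shape)}]");

        // The array passed to SetInputTensorData must match that element type:
        // a float32 model takes a float buffer, while an 8-bit quantized model
        // needs an integer buffer of the same shape (e.g. byte[,,] for a uint8
        // tensor). Passing float[,,] to a quantized input tensor fails with the
        // "TensorFlowLite operation failed" error shown above.
    }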

asus4 added the question label on Jan 11, 2020
asus4 closed this as completed on Feb 7, 2020
klauspa commented May 17, 2021

> Hi @selimbek, thanks for using my repo. I think the error is caused by the input data not being the correct shape or type. Please check the model's input; the input type will be ushort on the quantized model.

I can't quantize my SSD model. How do I modify the code so that my 32-bit model can be loaded in this project? Many thanks.

sandhyacs commented

> I can't quantize my SSD model. How do I modify the code so that my 32-bit model can be loaded in this project? Many thanks.

Hi @klauspa,

Did you get the solution for this? Even I am facing the same issue.
