Hi,

First of all, thanks for this repo; the examples are clear and work perfectly well. I successfully integrated my own UNet model using your DeepLab scene implementation.

Today I tried to dig deeper into optimization and use quantized models. I took mobilenetv2_coco_voc_trainaug_8bit from the official list (https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/quantize.md) and plugged it into the DeepLab scene.

I fixed some issues due to the fact that ArgMax is already applied inside the model, so the output shape is [1, 513, 513] instead of the previous [1, 513, 513, 21].
Your DeepLab class became:
public class DeepLab2 : BaseImagePredictor<float>
{
...
float[,] outputs0; // height, width
...
public DeepLab2(string modelPath, ComputeShader compute) : base(modelPath, true)
{
var odim0 = interpreter.GetOutputTensorInfo(0).shape;
Debug.Assert(odim0[1] == height);
Debug.Assert(odim0[2] == width);
outputs0 = new float[odim0[1], odim0[2]];
labelPixels = new Color32[width * height];
labelTex2D = new Texture2D(width, height, TextureFormat.RGBA32, 0, false);
...
}
...
public override void Invoke(Texture inputTex)
{
ToTensor(inputTex, inputs);
interpreter.SetInputTensorData(0, inputs);
interpreter.Invoke();
interpreter.GetOutputTensorData(0, outputs0);
}
...
public Texture2D GetResultTexture2D()
{
int rows = outputs0.GetLength(0); // y
int cols = outputs0.GetLength(1); // x
// int labels = outputs0.GetLength(2);
for (int y = 0; y < rows; y++)
{
for (int x = 0; x < cols; x++)
{
labelPixels[y * cols + x] = COLOR_TABLE[(int)outputs0[y, x]]; // row-major: [row, col]
}
}
labelTex2D.SetPixels32(labelPixels);
labelTex2D.Apply();
return labelTex2D;
}
...
}
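Before committing to a buffer type, it may help to log what the quantized model actually declares for its tensors. A minimal sketch to drop into the constructor, assuming the `TensorInfo` fields (`type`, `shape`) exposed by this repo's `Interpreter` wrapper (field names are an assumption, not verified against the current source):

```csharp
// Hypothetical sanity check: print the shapes and element types the
// .tflite file really expects, before allocating input/output buffers.
var inputInfo = interpreter.GetInputTensorInfo(0);
var outputInfo = interpreter.GetOutputTensorInfo(0);
Debug.Log($"input  type: {inputInfo.type}, shape: [{string.Join(",", inputInfo.shape)}]");
Debug.Log($"output type: {outputInfo.type}, shape: [{string.Join(",", outputInfo.shape)}]");
```

On an 8-bit quantized model the input is typically not float, and an in-graph ArgMax usually produces an integer output tensor, so a `float[,]` destination for `GetOutputTensorData` may also mismatch.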
But now I get an error from Interpreter.cs.

If you have any ideas :)
Thank you in advance!

Best regards,
--Selim

Hi @selimbek, thanks for using my repo. I think the error is caused by the input data not being the correct shape or type. Please check the model's input; the input type will be ushort on the quantized model.

I can't quantize my SSD model. How do I modify the code so that my 32-bit float model can be loaded in this project? Many thanks.
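Following the reply above about the input type: one way to adapt the predictor is to change the generic element type so the base class allocates an 8-bit input buffer. A hypothetical sketch, assuming `BaseImagePredictor<T>` in this repo supports an 8-bit type parameter (the class name and whether the model wants int8 or uint8 are assumptions; check what `GetInputTensorInfo(0)` reports for your .tflite file):

```csharp
// Sketch: parameterize the predictor on an 8-bit element type so that
// ToTensor() fills a buffer matching the quantized model's input tensor.
public class QuantizedDeepLab : BaseImagePredictor<sbyte>
{
    public QuantizedDeepLab(string modelPath, ComputeShader compute)
        : base(modelPath, true)
    {
        // The base class is expected to allocate `inputs` with sbyte
        // elements here instead of float.
    }
}
```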