deeplab cityscape edgetpu #3
At the moment, full integer quantization of DeepLab v3 has not succeeded, but I will look into it a little when I get home.
@Valdiolus

Tensorflow v1.15.0-GPU

```bash
$ cd models/research
$ export PYTHONPATH=`pwd`:`pwd`/slim:$PYTHONPATH
$ nano export_model.py
```

```python
# input_preprocess takes 4-D image tensor as input.
#input_image = tf.placeholder(tf.uint8, [1, None, None, 3], name=_INPUT_NAME)
input_image = tf.placeholder(tf.float32, [1, 513, 513, 3], name=_INPUT_NAME)
```

```bash
$ nano input_preprocess.py
```

```python
#processed_image = tf.cast(image, tf.uint8)
processed_image = image
```

```bash
$ python3 deeplab/export_model.py \
    --checkpoint_path=./model.ckpt-30000 \
    --export_path=./frozen_inference_graph.pb
```

Tensorflow v2.1.0 self-build v1-api for Ubuntu 18.04

```bash
$ sudo pip3 install tensorflow-2.1.0-cp36-cp36m-linux_x86_64.whl
```

```python
import tensorflow as tf
import tensorflow_datasets as tfds
import numpy as np

def representative_dataset_gen():
    for data in raw_test_data.take(10):
        image = data['image'].numpy()
        image = tf.image.resize(image, (513, 513))
        image = image[np.newaxis, :, :, :]
        yield [image]

tf.compat.v1.enable_eager_execution()

raw_test_data, info = tfds.load(name="voc/2007", with_info=True, split="validation", data_dir="~/TFDS", download=True)

graph_def_file = "frozen_inference_graph.pb"
input_arrays = ["ImageTensor"]
output_arrays = ['ResizeBilinear_2', 'SemanticProbabilities']
input_tensor = {"ImageTensor": [1, 513, 513, 3]}

# Integer Quantization - Input/Output=uint8
converter = tf.lite.TFLiteConverter.from_frozen_graph(graph_def_file, input_arrays, output_arrays, input_tensor)
converter.experimental_new_converter = True
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_quant_model = converter.convert()
with open('./deeplabv3_mnv2_pascal_trainval_513_full_integer_quant.tflite', 'wb') as w:
    w.write(tflite_quant_model)
print("Integer Quantization complete! - deeplabv3_mnv2_pascal_trainval_513_full_integer_quant.tflite")
```

deeplabv3_mnv2_pascal_trainval_513_full_integer_quant.tflite

```bash
$ edgetpu_compiler -s deeplabv3_mnv2_pascal_trainval_513_full_integer_quant.tflite
Edge TPU Compiler version 2.0.291256449
Internal compiler error. Aborting!
```
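As a side note, before handing a model to edgetpu_compiler it can be sanity-checked with the plain TFLite interpreter. A minimal sketch, assuming only the file produced above; the dummy uint8 input is a placeholder, not real data:

```python
import numpy as np
import tensorflow as tf

# Load the quantized model and confirm its I/O really is uint8.
interpreter = tf.lite.Interpreter(model_path="deeplabv3_mnv2_pascal_trainval_513_full_integer_quant.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print(input_details[0]['dtype'])   # expect numpy.uint8 for a full-integer model
print(output_details[0]['dtype'])  # expect numpy.uint8 as well

# Run one inference on a dummy uint8 image to confirm the graph executes on CPU.
dummy = np.random.randint(0, 256, size=(1, 513, 513, 3), dtype=np.uint8)
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']).shape)
```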
Thank you, I found a model from here and it works; all conversions complete successfully, but it's a Pascal dataset. They list the 8-bit and non-8-bit models separately.
I converted frozen_inference_graph.tflite with edgetpu_compiler and it succeeded. Is this a problem? I understand that retraining is required using the Cityscapes dataset.
The 8-bit Pascal .pb file from the TensorFlow GitHub converted to edgetpu.tflite successfully, but I need a Cityscapes-pretrained model. I will try quantization-aware training. Thank you!
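For reference, quantization-aware training in the DeepLab repository is driven by a train.py flag (described in its g3doc/quantize.md). A rough sketch of the invocation; every path, dataset flag, and step count below is a placeholder for the real Cityscapes setup:

```bash
# Quantization-aware fine-tuning from a float checkpoint; quantize_delay_step
# controls when fake-quantization ops are inserted (0 = from the first step).
# All paths and step counts here are placeholders.
$ python3 deeplab/train.py \
    --model_variant="mobilenet_v2" \
    --dataset="cityscapes" \
    --quantize_delay_step=0 \
    --training_number_of_steps=30000 \
    --tf_initial_checkpoint=./model.ckpt-30000 \
    --train_logdir=./train_logdir \
    --dataset_dir=./cityscapes/tfrecord
```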
@Valdiolus
Wow, thank you very much! I retrained too, but the quality was poor.
Hi! Thank you for your work, it helps me a lot!
I am now looking for a DeepLab model pretrained on Cityscapes and optimized for the Edge TPU (edgetpu.tflite).
I am trying to build it myself, but still have no luck. I see you have quantized Cityscapes 257 and 769 models - they would ideally fit my use case. I'm trying to convert them into an edgetpu.tflite file, but get "Model not quantized".
If you can help me, it would be awesome!
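As an aside, the "Model not quantized" message from edgetpu_compiler generally means float tensors remain in the .tflite file. A minimal sketch for listing them; the filename is a placeholder for the 257/769 Cityscapes model:

```python
import numpy as np
import tensorflow as tf

# List every tensor that is still float32; edgetpu_compiler needs
# a fully integer-quantized model. The model path is a placeholder.
interpreter = tf.lite.Interpreter(model_path="deeplab_cityscapes_quant_257.tflite")
interpreter.allocate_tensors()
float_tensors = [t['name'] for t in interpreter.get_tensor_details()
                 if t['dtype'] == np.float32]
print(len(float_tensors), "float32 tensors remain")
```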