
Commit 34c5cb7

created branch for tf1
1 parent 8beb5eb commit 34c5cb7


3 files changed: +7 -101 lines changed


.gitignore

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
+.idea

README.md

Lines changed: 1 addition & 96 deletions
@@ -309,106 +309,11 @@ python export_inference_graph.py --input_type image_tensor --pipeline_config_path

XXXX represents the highest number.

-### 8. Exporting Tensorflow Lite model
-
-If you want to run the model on an edge device like a Raspberry Pi, or on a smartphone, it's a good idea to convert your model to the Tensorflow Lite format. This can be done with the ```export_tflite_ssd_graph.py``` file.
-
-```bash
-mkdir inference_graph
-
-python export_tflite_ssd_graph.py --pipeline_config_path training/faster_rcnn_inception_v2_pets.config --trained_checkpoint_prefix training/model.ckpt-XXXX --output_directory inference_graph --add_postprocessing_op=true
-```
-
-After executing the command, there should be two new files in the inference_graph folder: a tflite_graph.pb and a tflite_graph.pbtxt file.
-
-Now you have a graph architecture and network operations that are compatible with Tensorflow Lite. To finish the conversion, you need to convert the actual model.
-
-### 9. Using TOCO to Create an Optimized TensorFlow Lite Model
-
-To convert the frozen graph to Tensorflow Lite we need to run it through the Tensorflow Lite Optimizing Converter (TOCO). TOCO converts the model into an optimized FlatBuffer format that runs efficiently on Tensorflow Lite.
-
-For this to work you need to have built Tensorflow from source. This is a tedious task which I won't cover in this tutorial, but you can follow the [official installation guide](https://www.tensorflow.org/install/source_windows). I'd recommend creating an [Anaconda Environment](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html) specifically for this purpose.
-
-After building Tensorflow from source you're ready to start with the conversion.
-
-#### 9.1 Create Tensorflow Lite model
-
-To create an optimized Tensorflow Lite model we need to run TOCO. TOCO is located in the tensorflow/lite directory, which you should have after installing Tensorflow from source.
-
-If you want to convert a quantized model you can run the following command:
-
-```bash
-export OUTPUT_DIR=/tmp/tflite
-bazel run --config=opt tensorflow/lite/toco:toco -- \
-  --input_file=$OUTPUT_DIR/tflite_graph.pb \
-  --output_file=$OUTPUT_DIR/detect.tflite \
-  --input_shapes=1,300,300,3 \
-  --input_arrays=normalized_input_image_tensor \
-  --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
-  --inference_type=QUANTIZED_UINT8 \
-  --mean_values=128 \
-  --std_values=128 \
-  --change_concat_input_ranges=false \
-  --allow_custom_ops
-```
-
-If you are using a floating point model, like a Faster R-CNN, you'll need to change the command a bit:
-
-```bash
-export OUTPUT_DIR=/tmp/tflite
-bazel run --config=opt tensorflow/lite/toco:toco -- \
-  --input_file=$OUTPUT_DIR/tflite_graph.pb \
-  --output_file=$OUTPUT_DIR/detect.tflite \
-  --input_shapes=1,300,300,3 \
-  --input_arrays=normalized_input_image_tensor \
-  --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
-  --inference_type=FLOAT \
-  --allow_custom_ops
-```
-
-If you are working on Windows, you might need to remove the ' quotes if the command doesn't work. For more information on how to use TOCO, check out [the official instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_mobile_tensorflowlite.md).
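
As an alternative to building TOCO with bazel, TF 1.x also exposes the same conversion through the Python `tf.lite.TFLiteConverter` API. The following is a minimal sketch of the float case, not part of the original tutorial, assuming the tflite_graph.pb from step 8 sits in /tmp/tflite:

```python
import tensorflow as tf  # TensorFlow 1.x

# Mirrors the float TOCO command above: same input array, output arrays and shape.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file='/tmp/tflite/tflite_graph.pb',
    input_arrays=['normalized_input_image_tensor'],
    output_arrays=['TFLite_Detection_PostProcess',
                   'TFLite_Detection_PostProcess:1',
                   'TFLite_Detection_PostProcess:2',
                   'TFLite_Detection_PostProcess:3'],
    input_shapes={'normalized_input_image_tensor': [1, 300, 300, 3]})
# TFLite_Detection_PostProcess is a custom op, so this flag is required,
# just like --allow_custom_ops above.
converter.allow_custom_ops = True

with open('/tmp/tflite/detect.tflite', 'wb') as f:
    f.write(converter.convert())
```

For the quantized case you would additionally set `converter.inference_type` and `converter.quantized_input_stats`, mirroring the `--mean_values`/`--std_values` flags above.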
-
-#### 9.2 Create new labelmap for Tensorflow Lite
-
-Next you need to create a labelmap for Tensorflow Lite, since it doesn't have the same format as a classical Tensorflow labelmap.
-
-Tensorflow labelmap:
-
-```bash
-item {
-    name: "a"
-    id: 1
-    display_name: "a"
-}
-item {
-    name: "b"
-    id: 2
-    display_name: "b"
-}
-item {
-    name: "c"
-    id: 3
-    display_name: "c"
-}
-```
-
-The Tensorflow Lite labelmap format only has the display_names (if there is no display_name, the name is used):
-
-```bash
-a
-b
-c
-```
-
-So basically the only thing you need to do is create a new labelmap file and copy the display_names (names) from the other labelmap file into it.
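
That copy step is easy to script. Below is a minimal sketch (not part of the original tutorial), assuming the classic labelmap sits at training/labelmap.pbtxt and the Tensorflow Lite version should go to training/labelmap.txt; both paths are just examples:

```python
import re

# Read the classic Tensorflow labelmap.
with open('training/labelmap.pbtxt') as f:
    pbtxt = f.read()

# Prefer the display_name fields; fall back to the name fields if there are none.
labels = re.findall(r'display_name:\s*"([^"]*)"', pbtxt)
if not labels:
    labels = re.findall(r'\bname:\s*"([^"]*)"', pbtxt)

# Write one label per line, in file order.
with open('training/labelmap.txt', 'w') as f:
    f.write('\n'.join(labels) + '\n')
```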
-
-### 10. Using the model for inference
+### 8. Using the model for inference

After training the model it can be used in many ways. For examples on how to use the model check out my other repositories.

* [Inference with Tensorflow 1.x](https://github.com/TannerGilbert/Tutorials/tree/master/Tensorflow%20Object%20Detection)
-* [Tensorflow-Object-Detection-with-Tensorflow-2.0](https://github.com/TannerGilbert/Tensorflow-Object-Detection-with-Tensorflow-2.0)
-* [Run TFLite model with EdgeTPU](https://github.com/TannerGilbert/Google-Coral-Edge-TPU/blob/master/tflite_object_detection.py)
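
If you only want to smoke-test a converted detect.tflite before wiring it into one of those projects, the stock TFLite interpreter is enough. A minimal sketch, assuming the float model and the 1x300x300x3 input used in step 9 (the path and the dummy input are placeholders):

```python
import numpy as np
import tensorflow as tf  # TensorFlow 1.x

interpreter = tf.lite.Interpreter(model_path='/tmp/tflite/detect.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Stand-in input; replace with a real, normalized 300x300 RGB image.
image = np.zeros((1, 300, 300, 3), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], image)
interpreter.invoke()

# TFLite_Detection_PostProcess outputs, in order:
boxes = interpreter.get_tensor(output_details[0]['index'])    # [1, N, 4] box coordinates
classes = interpreter.get_tensor(output_details[1]['index'])  # [1, N] class indices
scores = interpreter.get_tensor(output_details[2]['index'])   # [1, N] confidence scores
count = interpreter.get_tensor(output_details[3]['index'])    # number of detections
```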

## Appendix

training/faster_rcnn_inception_v2_pets.config

Lines changed: 5 additions & 5 deletions
@@ -103,7 +103,7 @@ train_config: {
    use_moving_average: false
  }
  gradient_clipping_by_norm: 10.0
- fine_tune_checkpoint: "C:/Users/Gilbert/Desktop/Programming/models/research/object_detection/training/faster_rcnn_inception_v2_coco_2018_01_28/model.ckpt"
+ fine_tune_checkpoint: "<path>/object_detection/training/faster_rcnn_inception_v2_coco_2018_01_28/model.ckpt"
  from_detection_checkpoint: true
  load_all_detection_checkpoint_vars: true
  # Note: The below line limits the training process to 200K steps, which we
@@ -120,9 +120,9 @@ train_config: {

train_input_reader: {
  tf_record_input_reader {
-   input_path: "C:/Users/Gilbert/Desktop/Programming/models/research/object_detection/train.record"
+   input_path: "<path>/object_detection/train.record"
  }
- label_map_path: "PATH_TO_BE_CONFIGURED/pet_label_map.pbtxt"
+ label_map_path: "<path>/object_detection/training/labelmap.pbtxt"
}

eval_config: {
@@ -132,9 +132,9 @@ eval_config: {

eval_input_reader: {
  tf_record_input_reader {
-   input_path: "C:/Users/Gilbert/Desktop/Programming/models/research/object_detection/test.record"
+   input_path: "<path>/object_detection/test.record"
  }
- label_map_path: "C:/Users/Gilbert/Desktop/Programming/models/research/object_detection/training/labelmap.pbtxt"
+ label_map_path: "<path>/object_detection/training/labelmap.pbtxt"
  shuffle: false
  num_readers: 1
}
