
[Question&Error] Is there detection model like a SSD-Mobile-net in tensorflow-lite? #15633

Closed

Nanamare opened this issue Dec 26, 2017 · 141 comments

Labels: comp:lite (TF Lite related issues), type:feature (Feature requests)

@Nanamare commented Dec 26, 2017

Hi,

I'm developing an Android application using TensorFlow Lite.

https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/lite/g3doc/models.md
I could not find a detection model there.

I also tried to convert SSD-Inception-v2 using the TensorFlow Lite API, but there seems to be a problem.

## Command


bazel run --config=opt --copt=-msse4.1 --copt=-msse4.2 \
  //tensorflow/contrib/lite/toco:toco -- \
  --input_file=/home/danshin/tensorflow_lite/lite_model/fire_incpetion_v2.pb \
  --output_file=/home/danshin/tensorflow_lite/lite_model/fire_inception_v2.lite \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --inference_type=FLOAT \
  --input_shape=1,300,300,3 \
  --input_array=image_tensor \
  --output_array={detection_boxes,detection_scores,detection_classes,num_detections}

## Error code


2017-12-26 14:59:25.159220: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 2029 operators, 3459 arrays (0 quantized)
2017-12-26 14:59:25.251633: F tensorflow/contrib/lite/toco/graph_transformations/resolve_tensorflow_switch.cc:95] Check failed: other_op->type == OperatorType::kTensorFlowMerge 

The fire_inception_v2 file is created, but its size is zero bytes.
What is the problem?

Also, please let me know the best way to deploy a custom model for object detection.

Can somebody help me, please?

Thank you.

@bignamehyp (Member) commented:

@aselle can you please take a look at this issue? Thanks.

@bignamehyp added the stat:awaiting tensorflower label Dec 28, 2017
@aselle added the comp:lite label Dec 28, 2017
@aselle (Contributor) commented Dec 28, 2017

We are currently working on converting MobileNet SSD (and Inception SSD after that), but it contains ops that are not fully supported. I will update this issue once that is done.

@aselle self-assigned this Dec 28, 2017
@aselle added the type:feature label Dec 28, 2017
@mpeniak commented Jan 8, 2018

Great, I asked a similar question here: #14731

How long do you reckon until you guys add support for SSD-MobileNet?

Thanks,
Martin Peniak

@tensorflowbutler removed the stat:awaiting tensorflower label Jan 23, 2018
@tensorflowbutler (Member) commented:

A member of the TensorFlow organization has replied after the stat:awaiting tensorflower label was applied.

@mpeniak commented Jan 23, 2018

?

@tensorflowbutler (Member) commented:

Nagging Assignee: It has been 14 days with no activity and this issue has an assignee. Please update the label and/or status accordingly.

@arn197 commented Feb 12, 2018

Any updates?
I'm also facing a similar issue. Thanks in advance.

@domidataguy commented:

@yucheeling

@rana3579 commented Mar 6, 2018

Could you please suggest a dataset, like "ssd_mobilenet_v1_coco_2017_11_17.tar", that could be used in a retail shop to identify different apparel, such as t-shirts, jeans, etc.?

@aselle (Contributor) commented Mar 7, 2018

@rana3579, please ask such questions on Stack Overflow. A quick update on MobileNet SSD: it is progressing, and we hope to have an example out soon.

@mpeniak commented Mar 7, 2018

@rana3579 check out my video; I got this running on Movidius, NVIDIA GPUs, and ARM processors. I cannot share the dataset, but if you are part of a company we could talk about a potential collaboration: https://www.youtube.com/watch?v=3MinI9cCJrc

@mpeniak commented Mar 7, 2018

@aselle thanks for the update! Where should I look for notifications on this? I would like to be notified as soon as it is out, if possible. Thank you, I appreciate your hard work on this!

@aselle (Contributor) commented Mar 9, 2018

@andrewharp is working on this and will be updating the Java TF Mobile app to use tflite, so watch for those changes in the repository. I'll leave this issue open for now.

@aselle added the stat:awaiting tensorflower label Mar 9, 2018
@andrewharp (Contributor) commented:

This is functional internally; we should have something out in the next week or two.

@tensorflowbutler removed the stat:awaiting tensorflower label Mar 10, 2018
@madhavajay commented:

@andrewharp that's awesome! Does that also go for the iOS camera example?
Also, what do the size of the weights and the performance look like?
The TFLite classification MobileNet is tiny, and the performance on iOS is buttery smooth, so I'm really excited for TFLite.

Some others have already converted the existing SSD MobileNet .pb to a Core ML model and wrote the missing output layers in Swift:
https://github.com/vonholst/SSDMobileNet_CoreML

But that's only about 8-12 fps on an iPhone 7.

@pathwayai commented:

Hi,
Any update on this?

@cpdiku commented Mar 26, 2018

I am also curious :)

@andrewharp (Contributor) commented:

I have a commit porting the Android TF demo to tflite currently under review; it should show up on GitHub this week, hopefully.

@madhavajay It's Android only, but you should be able to adapt it for iOS. The only thing is that some of the pre-processing (image resizing/normalization) and post-processing (non-max suppression and adjustment by box priors) is done in Java, as tflite doesn't fully support all the operators used by MobileNet SSD.
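
For readers wondering what that Java post-processing involves, below is a minimal greedy non-max suppression sketch. It is an illustration under assumed conventions (boxes as [ymin, xmin, ymax, xmax] float arrays, a typical IoU threshold around 0.6), not the demo's exact code:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Greedy NMS: keep the highest-scoring box, suppress boxes that overlap it
// too much, then repeat with the next surviving box.
static List<Integer> nonMaxSuppression(final float[][] boxes, final float[] scores, float iouThreshold) {
    List<Integer> order = new ArrayList<>();
    for (int i = 0; i < scores.length; ++i) order.add(i);
    // Highest-scoring boxes first.
    Collections.sort(order, (a, b) -> Float.compare(scores[b], scores[a]));

    boolean[] suppressed = new boolean[scores.length];
    List<Integer> keep = new ArrayList<>();
    for (int idx : order) {
        if (suppressed[idx]) continue;
        keep.add(idx);
        for (int other : order) {
            if (!suppressed[other] && other != idx
                    && iou(boxes[idx], boxes[other]) > iouThreshold) {
                suppressed[other] = true;
            }
        }
    }
    return keep;
}

// Intersection-over-union of two [ymin, xmin, ymax, xmax] boxes.
static float iou(float[] a, float[] b) {
    float ih = Math.max(0f, Math.min(a[2], b[2]) - Math.max(a[0], b[0]));
    float iw = Math.max(0f, Math.min(a[3], b[3]) - Math.max(a[1], b[1]));
    float inter = ih * iw;
    float areaA = (a[2] - a[0]) * (a[3] - a[1]);
    float areaB = (b[2] - b[0]) * (b[3] - b[1]);
    return inter / (areaA + areaB - inter);
}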

@madhavajay commented Mar 26, 2018

@andrewharp That's awesome. Can you briefly explain why those operations are not currently available in TF Lite? It seems to be the same case for the tfcoreml conversion tool on regular SSD. Not complaining, just asking out of technical interest: do they do something that's particularly difficult to implement in the mobile stack, or is it just low priority?

@madhavajay commented:

Looking forward to seeing your epic effort on the Android code! Thanks a lot. I know I'm not the only one looking forward to this!

@grewe commented Mar 29, 2018

@andrewharp and @aselle, any update on the demo of SSD-based object localization for TFLite?

@andrewharp (Contributor) commented Mar 31, 2018

It's live now at tensorflow/contrib/lite/examples/android! This is a more complete port of the original TF Android demo (only lacking the Stylize example), and will be replacing the other demo in tensorflow/contrib/lite/java/demo going forward.

A converted TF Lite flatbuffer can be found in mobilenet_ssd_tflite_v1.zip, and you can find the Java inference implementation in TFLiteObjectDetectionAPIModel.java. Note that this differs from the original TF implementation in that the boxes must be manually decoded in Java, and a box prior txt file needs to be packaged in the app's assets (I think the one included in the model zip above should be valid for most graphs).
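
For reference, the box-prior adjustment mentioned above amounts to standard SSD box decoding. A minimal sketch follows; the names predictions/boxPriors are illustrative, and the scale constants 10/10/5/5 are the conventional SSD defaults (an assumption here, so check your graph's values):

// Illustrative shapes: predictions = float[1][NUM_RESULTS][4] from "concat",
// boxPriors = float[4][NUM_RESULTS] ([ycenter, xcenter, h, w] rows) parsed
// from the box prior txt file in the zip.
for (int i = 0; i < NUM_RESULTS; ++i) {
    float yCenter = predictions[0][i][0] / 10.0f * boxPriors[2][i] + boxPriors[0][i];
    float xCenter = predictions[0][i][1] / 10.0f * boxPriors[3][i] + boxPriors[1][i];
    float h = (float) Math.exp(predictions[0][i][2] / 5.0f) * boxPriors[2][i];
    float w = (float) Math.exp(predictions[0][i][3] / 5.0f) * boxPriors[3][i];

    // Center/size form to corner form, for drawing and non-max suppression.
    float yMin = yCenter - h / 2.0f;
    float xMin = xCenter - w / 2.0f;
    float yMax = yCenter + h / 2.0f;
    float xMax = xCenter + w / 2.0f;
    // ... score each box from "concat_1" and run NMS over the results ...
}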

During TOCO conversion a different input node (Preprocessor/sub) is used, as well as different output nodes (concat,concat_1). This skips some parts that are problematic for tflite, until either the graph is restructured or TF Lite reaches TF parity.

Here are the quick steps for converting an SSD MobileNet model to tflite format and building the demo to use it:

# Download and extract SSD MobileNet model
wget http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_coco_2017_11_17.tar.gz
tar -xvf ssd_mobilenet_v1_coco_2017_11_17.tar.gz 
DETECT_PB=$PWD/ssd_mobilenet_v1_coco_2017_11_17/frozen_inference_graph.pb
STRIPPED_PB=$PWD/frozen_inference_graph_stripped.pb
DETECT_FB=$PWD/tensorflow/contrib/lite/examples/android/assets/mobilenet_ssd.tflite

# Strip out problematic nodes before even letting TOCO see the graphdef
bazel run -c opt tensorflow/python/tools/optimize_for_inference -- \
--input=$DETECT_PB  --output=$STRIPPED_PB --frozen_graph=True \
--input_names=Preprocessor/sub --output_names=concat,concat_1 \
--alsologtostderr

# Run TOCO conversion.
bazel run tensorflow/contrib/lite/toco:toco -- \
--input_file=$STRIPPED_PB --output_file=$DETECT_FB \
--input_format=TENSORFLOW_GRAPHDEF --output_format=TFLITE \
--input_shapes=1,300,300,3 --input_arrays=Preprocessor/sub \
--output_arrays=concat,concat_1 --inference_type=FLOAT --logtostderr

# Build and install the demo
bazel build -c opt --cxxopt='--std=c++11' //tensorflow/contrib/lite/examples/android:tflite_demo
adb install -r -f bazel-bin/tensorflow/contrib/lite/examples/android/tflite_demo.apk
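
Once the demo is installed, the app memory-maps the flatbuffer from its assets and wraps it in an Interpreter. A minimal sketch of that loading step, assuming the asset name mobilenet_ssd.tflite from the $DETECT_FB path above:

import android.content.res.AssetFileDescriptor;
import android.content.res.AssetManager;
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import org.tensorflow.lite.Interpreter;

// Memory-map the model from the APK's assets and create the interpreter.
static Interpreter loadModel(AssetManager assets, String path) throws IOException {
    AssetFileDescriptor fd = assets.openFd(path);
    try (FileInputStream in = new FileInputStream(fd.getFileDescriptor())) {
        MappedByteBuffer model = in.getChannel().map(
            FileChannel.MapMode.READ_ONLY, fd.getStartOffset(), fd.getDeclaredLength());
        return new Interpreter(model);
    }
}

// Usage: Interpreter tflite = loadModel(getAssets(), "mobilenet_ssd.tflite");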

@ashwaniag commented:

@achowdhery It is my own dataset. I trained it for the MobileNet v2 architecture. When I run the .pb model (TensorFlow model), I get:
Not found: Op type not registered 'NonMaxSuppressionV3' in binary running on VAL5-04. Make sure the Op and Kernel are registered in the binary running in this process.

Do you think it's related?

@achowdhery commented:

@ashwaniag Please open a new bug and provide exact instructions to reproduce it.

@achraf-boussaada commented:

@ashwaniag Check both of these issues; I had a similar problem: #10254 and #19854

@ashwaniag commented Aug 15, 2018

@achraf-boussaada Thank you! I fixed it; it was a version mismatch issue.
@achowdhery Now the problem is that the full TensorFlow model gives me great results, but the tflite model gives very bad results.

@achowdhery commented:

@ashwaniag Please define "very bad results". Do you have small objects? Please attach a model checkpoint, pipeline config, and label file, as well as a sample image, to help us reproduce the issue. Thanks.

@zhyj3038 commented:

@oopsodd Hello, I get a wrong class index too. It complained "java.lang.ArrayIndexOutOfBoundsException: length=10; index=-739161663". Can you help me?

@bairesearch commented:

Note I have created TensorFlow Lite SSD (object detection) minimal working examples for iOS and Android: https://github.com/baxterai/tfliteSSDminimalWorkingExample. The iOS version is based on obj_detect_lite.cc by YijinLiu (with the nms function by WeiboXu), and the Android version is based on https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/lite/examples/android tflDetect. It removes all overhead like the internal camera, and isolates the core code required to detect objects and display the detection boxes.

@JaviBonilla commented:

@baxterai Great work! Thanks, I will test it.

@Georg-W commented Aug 27, 2018

Thanks for your amazing work everybody! I have another question regarding the recently added postprocessing operation.

The output of the pretrained ssd_mobilenet_v1_quantized_coco is currently limited to the top 10 detections per frame, even though the default configs in models/research/object_detection/samples/configs/, such as ssd_mobilenet_v1_quantized_300x300_coco14_sync.config, all specify a higher limit of total detections:

post_processing {
  batch_non_max_suppression {
    score_threshold: 1e-8
    iou_threshold: 0.6
    max_detections_per_class: 100
    max_total_detections: 100
  }
  score_converter: SIGMOID
}

Is this resolved by retraining the network with this pipeline configuration, or is the dimensionality of 'TFLite_Detection_PostProcess' fixed to 10 by other configurations?

@achowdhery commented:

@Georg-W You will need to change max_detections in export_tflite_ssd_graph.py as well. There is a command-line option.

@Georg-W commented Aug 29, 2018

@achowdhery Ah, thank you! That's what I missed.

@KaviSanth commented:

@andrewharp Thank you so much for your custom inference class TFLiteObjectDetectionAPIModel.java. I've tried it with your SSD MobileNet v1 tflite model, mobilenet_ssd_tflite_v1.zip, but when the app starts there seems to be a problem in the function recognizeImage(final Bitmap bitmap): when I call tfLite.runForMultipleInputsOutputs(inputArray, outputMap); it throws this exception:

07-18 10:37:02.416 19957-19996/com.app.cerist.realtimeobjectdetectionapi E/AndroidRuntime: FATAL EXCEPTION: Camera
    Process: com.app.cerist.realtimeobjectdetectionapi, PID: 19957
    java.lang.IllegalArgumentException: Output error: Outputs do not match with model outputs.
        at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:170)
        at com.app.cerist.realtimeobjectdetectionapi.ImageClassifierTFLiteAPI.recognizeImage(ImageClassifierTFLiteAPI.java:207)
        at com.app.cerist.realtimeobjectdetectionapi.MainActivity.classifyFrame(MainActivity.java:421)
        at com.app.cerist.realtimeobjectdetectionapi.MainActivity.access$1000(MainActivity.java:48)
        at com.app.cerist.realtimeobjectdetectionapi.MainActivity$4.run(MainActivity.java:455)
        at android.os.Handler.handleCallback(Handler.java:739)
        at android.os.Handler.dispatchMessage(Handler.java:95)
        at android.os.Looper.loop(Looper.java:159)
        at android.os.HandlerThread.run(HandlerThread.java:61)
07-18 10:37:02.436 19957-19996/com.app.cerist.realtimeobjectdetectionapi V/Process: killProcess [19957] Callers=com.android.internal.os.RuntimeInit$UncaughtHandler.uncaughtException:99 java.lang.ThreadGroup.uncaughtException:693 java.lang.ThreadGroup.uncaughtException:690 <bottom of call stack> 
07-18 10:37:02.436 19957-19996/com.app.cerist.realtimeobjectdetectionapi I/Process: Sending signal. PID: 19957 SIG: 9

The error says that the outputs map does not match the model's outputs. Here is the condition in Interpreter.java:

public void runForMultipleInputsOutputs(Object[] inputs, @NonNull Map<Integer, Object> outputs) {
    if (this.wrapper == null) {
        throw new IllegalStateException("Internal error: The Interpreter has already been closed.");
    } else {
        Tensor[] tensors = this.wrapper.run(inputs);
        if (outputs != null && tensors != null && outputs.size() <= tensors.length) {
            int size = tensors.length;
            Iterator var5 = outputs.keySet().iterator();
            // ... (remainder of the decompiled method omitted)
        }
    }
}

And these are my input and output arrays:

d.imgData = ByteBuffer.allocateDirect(1 * d.inputSize * d.inputSize * 3 * numBytesPerChannel);
d.imgData.order(ByteOrder.nativeOrder());
d.intValues = new int[d.inputSize * d.inputSize];

imgData.rewind();
for (int i = 0; i < inputSize; ++i) {
    for (int j = 0; j < inputSize; ++j) {
        int pixelValue = intValues[i * inputSize + j];
        if (isModelQuantized) {
            // Quantized model
            imgData.put((byte) ((pixelValue >> 16) & 0xFF));
            imgData.put((byte) ((pixelValue >> 8) & 0xFF));
            imgData.put((byte) (pixelValue & 0xFF));
        } else { // Float model
            imgData.putFloat((((pixelValue >> 16) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
            imgData.putFloat((((pixelValue >> 8) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
            imgData.putFloat(((pixelValue & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
        }
    }
}

The output arrays:

// Copy the input data into TensorFlow.
        Trace.beginSection("feed");
        outputLocations = new float[1][NUM_DETECTIONS][4];
        outputClasses = new float[1][NUM_DETECTIONS];
        outputScores = new float[1][NUM_DETECTIONS];
        numDetections = new float[1];

        Object[] inputArray = {imgData};
        Map<Integer, Object> outputMap = new HashMap<>();
        outputMap.put(0, outputLocations);
        outputMap.put(1, outputScores);
        outputMap.put(2, numDetections);
        outputMap.put(3, outputClasses);
        Trace.endSection();

And the inference call:

// Run the inference call.
        Trace.beginSection("run");
        Log.d("TAG_INPUT",""+String.valueOf(inputArray.length));
        Log.d("TAG_OUTPUT",""+String.valueOf(outputMap.size()));

        tfLite.runForMultipleInputsOutputs(inputArray, outputMap);
        Trace.endSection();

I don't understand the meaning of this error, because I did exactly the same as your TFLiteObjectDetectionAPIModel.java class.
Thank you for your help.

I have the same issue. Did you find a solution?
Thanks.
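
A note for anyone hitting this exception: the check quoted above fails when the output map has more entries than the model has output tensors. A likely cause (an assumption, not confirmed in this thread) is that the mobilenet_ssd_tflite_v1.zip model converted with the commands earlier in this thread exposes only the two raw outputs concat and concat_1, while the four-entry map above targets the newer post-processed model. A sketch of a map matching the two-output model; the 1917 anchors and 91 COCO classes are assumptions based on the stock 300x300 SSD MobileNet graph:

// Raw-output model: 0 -> "concat" (box encodings), 1 -> "concat_1" (class scores).
final int NUM_RESULTS = 1917;  // anchor count, assumed for the stock 300x300 graph
final int NUM_CLASSES = 91;    // COCO classes incl. background, assumed

float[][][] outputLocations = new float[1][NUM_RESULTS][4];
float[][][] outputScores = new float[1][NUM_RESULTS][NUM_CLASSES];

Object[] inputArray = {imgData};
Map<Integer, Object> outputMap = new HashMap<>();
outputMap.put(0, outputLocations);
outputMap.put(1, outputScores);

tfLite.runForMultipleInputsOutputs(inputArray, outputMap);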

@SteveIb commented Nov 25, 2018

@Georg-W You will need to change max_detections in export_tflite_ssd_graph.py as well. There is a command-line option.

Hi,

I'm trying to detect more than 10 objects in an image (10 is the default). I'm using the following command:

bazel run -c opt tensorflow/contrib/lite/toco:toco -- --input_file=$OUTPUT_DIR/tflite_graph.pb --output_file=$OUTPUT_DIR/mobile_net_500.tflite --input_shapes=1,300,300,3 --input_arrays=normalized_input_image_tensor --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' --inference_type=FLOAT --max_detections=500 --max_classes_per_detection=1 --allow_custom_ops

I also modified export_tflite_ssd_graph.py:

flags.DEFINE_integer('max_detections', 500,  # instead of 10
                     'Maximum number of detections (boxes) to show.')
flags.DEFINE_integer('max_classes_per_detection', 1,
                     'Number of classes to display per detection box.')

but it still gives 10 objects as output in the Android app ([1, 10, 4]).

Any idea?

@defaultUser3214 commented:

I would also be interested in the solution to @KaviSanth's issue.

@achowdhery commented:

The solution from @SteveIb should work. You may want to visualize the frozen graph to make sure that max_detections is set correctly.

@defaultUser3214 commented Jan 23, 2019

@achowdhery Thank you for your reply. I tried to execute the commands written by @andrewharp, but I get the following error. Indeed, toco isn't located at that path. I am using the master version and the r1.95 version from the GitHub repository.

bazel run tensorflow/contrib/lite/toco:toco -- --input_file=$STRIPPED_PB --output_file=$DETECT_FB --input_format=TENSORFLOW_GRAPHDEF --output_format=TFLITE --input_shapes=1,300,300,3 --input_arrays=Preprocessor/sub --output_arrays=concat,concat_1 --inference_type=FLOAT --logtostderr
INFO: Invocation ID: 0e58a5ef-9fee-4619-b760-aeb1c83c9661
ERROR: Skipping 'tensorflow/contrib/lite/toco:toco': no such package 'tensorflow/contrib/lite/toco': BUILD file not found on package path
WARNING: Target pattern parsing failed.
ERROR: no such package 'tensorflow/contrib/lite/toco': BUILD file not found on package path
INFO: Elapsed time: 0.179s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded)
FAILED: Build did NOT complete successfully (0 packages loaded)
I should add that I am executing those commands from my local tensorflow folder pulled from Git.

I could find a toco under tensorflow/lite/toco, and I am just testing whether it works.
OK, it seems to work using this toco. Apart from that, you have to change the $DETECT_FB path to $PWD/ssd_mobilenet.tflite, since the contrib/lite folder only contains some Python and nothing else.

@defaultUser3214 commented Jan 23, 2019

A runtime error occurs when adding the .tflite file to the DetectorActivity from https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/android/src/org/tensorflow/demo/DetectorActivity.java) with the line

private static final String TF_OD_API_MODEL_FILE =
            "file:///android_asset/ssd_mobilenet_v1.tflite";

E/AndroidRuntime: FATAL EXCEPTION: main
Process: myProcess, PID: 32611
java.lang.RuntimeException: Failed to find input Node 'image_tensor'
at myPackage.myClass.TensorFlowObjectDetectionAPIModel.create(TensorFlowObjectDetectionAPIModel.java:106)

Is it not possible to use .tflite models in that app?

@achowdhery commented:

@defaultUser3214 You are using a classification model in the detection app. MobileNet v1 is a classification model. Please use a MobileNet SSD model.

@defaultUser3214 commented Jan 23, 2019

@achowdhery Thank you! Using the model from wget http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_coco_2017_11_17.tar.gz resulted in that error. But I thought that this was the SSD version?

But using ssd_mobilenet_v1_android_export.pb converted to .tflite, which worked as a .pb before, produces the same error.

@achowdhery commented:

@defaultUser3214 That's an old version of the model that will not work in the latest demo app released in July 2018. Please download the latest (July 2018) models from the detection model zoo; they do work in the app. Please open a new issue if this is still blocked.

@AdamWP commented Feb 7, 2019

@SteveIb You also need to change NUM_DETECTIONS = 500 in TFLiteObjectDetectionAPIModel.java
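
For context: NUM_DETECTIONS in that class sizes the output buffers, so it must agree with the max_detections the graph was exported with; otherwise the buffer shapes won't match the model's output tensors. A minimal sketch, with shapes as in the code quoted earlier in this thread:

// Must match --max_detections passed to export_tflite_ssd_graph.py.
private static final int NUM_DETECTIONS = 500;

// All four output buffers are sized from NUM_DETECTIONS.
outputLocations = new float[1][NUM_DETECTIONS][4];
outputClasses = new float[1][NUM_DETECTIONS];
outputScores = new float[1][NUM_DETECTIONS];
numDetections = new float[1];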

@bhamapillutla commented:

I am not able to convert an SSD MobileNet v1 .pb to .tflite. The .pb was generated through the TensorFlow Object Detection API. @aselle @achowdhery

@CianShev commented:

Any progress on this? I'm trying to convert frozen_inference_graph.pb to a .tflite file but am getting the error:

java.lang.IllegalArgumentException: Cannot convert between a TensorFlowLite buffer with 49152 bytes and a ByteBuffer with 270000 bytes

This is for custom object detection on Android. Any ideas on different conversion methods? I transfer-learned ssd_mobilenet_v1_pets on Windows 10 following the tutorial here: https://github.com/EdjeElectronics/TensorFlow-Object-Detection-API-Tutorial-Train-Multiple-Objects-Windows-10

@CianShev commented:


Just to follow up on this, and to help anyone else having the same error: it is caused by using an incorrect model checkpoint to train from. To work on Android with .tflite, the initial model must be a MobileNet and must also be quantized; it will have this section of code, or something similar, in the .config file:

graph_rewriter {
  quantization {
    delay: 48000
    weight_bits: 8
    activation_bits: 8
  }
}

@Suraj520 commented:

> (quoting @andrewharp's conversion steps above)

This works like a charm!
