
Errors loading inception v3 in iOS example #3480

Closed
Piso opened this issue Jul 24, 2016 · 16 comments


@Piso

commented Jul 24, 2016

Environment info

Operating System: Mac OS X / iOS

If installed from source, provide

  1. The commit hash (git rev-parse HEAD) : fc91629
  2. The output of bazel version: Build label: 0.3.0-homebrew

Steps to reproduce

  1. Download the .pb file from https://storage.googleapis.com/download.tensorflow.org/models/inception_dec_2015.zip
  2. Insert the .pb file in the data folder of the camera iOS project
  3. Launch the project from Xcode; the console outputs the following errors:

Running model failed:Invalid argument: Session was not created with a graph before Run()!

Running model failed:Invalid argument: No OpKernel was registered to support Op 'DecodeJpeg' with these attrs [[Node: DecodeJpeg = DecodeJpeg[acceptable_fraction=1, channels=3, fancy_upscaling=true, ratio=1, try_recover_truncated=false](DecodeJpeg/contents)]]

What have you tried?

  1. Ran the following command referenced in #2883:

bazel build tensorflow/python/tools:strip_unused && \
bazel-bin/tensorflow/python/tools/strip_unused \
--input_graph=your_retrained_graph.pb \
--output_graph=stripped_graph.pb \
--input_node_names=Mul \
--output_node_names=final_result \
--input_binary=true

However, I receive the following error:

/tensorflow/bazel-bin/tensorflow/python/tools/strip_unused.runfiles/org_tensorflow/tensorflow/python/framework/graph_util.py", line 156, in extract_sub_graph
    assert d in name_to_node_map, "%s is not in graph" % d
AssertionError: final_result is not in graph
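The assertion comes from extract_sub_graph in graph_util.py: strip_unused fails whenever a requested output node name is not present in the graph, and the stock Inception v3 download has no final_result node (that name only appears in retrained graphs). A minimal pure-Python sketch of the failing check, with illustrative node names:

```python
# Sketch of the membership check that raises "final_result is not in graph"
# in graph_util.extract_sub_graph. Node names below are illustrative; a
# stock Inception v3 graph has nodes like "Mul", "pool_3", and "softmax",
# but no "final_result".

def extract_sub_graph_check(node_names, dest_nodes):
    """Mimic the assertion in extract_sub_graph."""
    name_to_node_map = {name: object() for name in node_names}
    for d in dest_nodes:
        assert d in name_to_node_map, "%s is not in graph" % d
    return True

stock_v3_nodes = ["DecodeJpeg/contents", "DecodeJpeg", "Mul", "pool_3", "softmax"]

# "softmax" is present, so stripping to it succeeds:
extract_sub_graph_check(stock_v3_nodes, ["softmax"])

# "final_result" is absent, so this raises AssertionError:
try:
    extract_sub_graph_check(stock_v3_nodes, ["final_result"])
except AssertionError as e:
    print(e)  # final_result is not in graph
```

So the fix is to pass an output node that actually exists in the graph being stripped.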

@Tugees


commented Jul 26, 2016

Hi Piso. Use pool_3 as the --output_node_names value instead of final_result.

I've also been trying to get the iOS sample to work with Inception v3, but I'm getting the same prediction no matter what image the camera captures. So if you manage to get it running, could you please help me with my issue #3446? Thanks.

@Tugees


commented Jul 26, 2016

Here is the full command you need:

bazel-bin/tensorflow/python/tools/strip_unused \
--input_graph=your_retrained_graph.pb \
--output_graph=stripped_graph.pb \
--input_node_names=Mul \
--output_node_names=pool_3 \
--input_binary=true

@shrutisharmavsco


commented Aug 19, 2016

@Piso try running this from the root of your tensorflow repo:

bazel build tensorflow/python/tools:strip_unused && \
bazel-bin/tensorflow/python/tools/strip_unused \
--input_graph=tensorflow/contrib/ios_examples/camera/data/tensorflow_inception_graph.pb \
--output_graph=tensorflow/contrib/ios_examples/camera/data/tensorflow_inception_graph_stripped.pb \
--input_node_names=Mul \
--output_node_names=softmax \
--input_binary=true

Then change the name of the graph being loaded in CameraExampleViewController.mm. Specifically, change this line:

tensorflow::Status load_status =
      LoadModel(@"tensorflow_inception_graph", @"pb", &tf_session);

to

tensorflow::Status load_status =
      LoadModel(@"tensorflow_inception_graph_stripped", @"pb", &tf_session);
@jeffxtang

Contributor

commented Aug 22, 2016

@Tugees @shrutisharmavsco and @Piso

  1. Are you able to run bazel-bin/tensorflow/python/tools/strip_unused on TensorFlow 0.10.0rc? (I just created an issue #3962)
  2. Are you able to run the stripped Inception V3 model on iOS device without crashing?

Thanks,
Jeff

@shrutisharmavsco


commented Aug 22, 2016

@jeffxtang

  1. I haven't tried to run strip_unused script on TensorFlow 0.10.0rc
  2. I was able to run the stripped Inception V3 model on iOS without crashing, but I only tried it in the simple example, which tags one image at a time - not in the camera example.
@jeffxtang

Contributor

commented Aug 22, 2016

Thanks @shrutisharmavsco - I used the same command you posted (I tried both --output_node_names=softmax and --output_node_names=pool_3) and then used the stripped model in the tf_ios simple example, with changes to std::string input_layer and std::string output_layer as well as to const int wanted_width etc., as specified by @petewarden in #2883. But the app simply crashed on my iPhone 6 device - similar screenshot to #2927.

So @shrutisharmavsco, you can run the stripped V3 model on an actual iOS device and get recognition results? Did you do anything differently from what I did above? Thanks!
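The constant changes mentioned above can be summarized side by side. This is a sketch assuming the Inception v3 values posted in #2883 (299x299 input, mean and std of 128, Mul as the input node) versus the v1 defaults shipped with the simple example; verify both sets against your own graph:

```python
# Inception v1 settings as shipped in the iOS simple example, versus the
# Inception v3 settings from #2883. Treat both as assumptions to verify
# against the actual graph you are loading.

INCEPTION_V1 = {
    "input_layer": "input",
    "output_layer": "output",
    "wanted_width": 224,
    "wanted_height": 224,
    "input_mean": 117,
    "input_std": 1,
}

INCEPTION_V3 = {
    "input_layer": "Mul",       # the node strip_unused kept as input
    "output_layer": "softmax",  # or "pool_3" for raw features
    "wanted_width": 299,
    "wanted_height": 299,
    "input_mean": 128,
    "input_std": 128,
}

def normalize_pixel(value, params):
    """Scale one 0-255 pixel value the way the example's input loop does."""
    return (value - params["input_mean"]) / params["input_std"]

# v3 maps 0..255 into roughly -1..1:
print(normalize_pixel(0, INCEPTION_V3), normalize_pixel(255, INCEPTION_V3))
# -1.0 0.9921875
```

Feeding v1-scaled input (mean 117, std 1) into a v3 graph is one common cause of wrong or constant predictions.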

@shrutisharmavsco


commented Aug 23, 2016

@jeffxtang that all sounds like the right steps to me. I don't think I did anything differently. I was running it on an iPhone 5C. I do get memory warnings generated but the app does not crash - it still generates the tags. I haven't quantized the model yet, though.

@jeffxtang

Contributor

commented Aug 23, 2016

Thanks @shrutisharmavsco for the info. When you have a chance, can you try quantizing the model and see whether the memory warnings disappear on your iPhone 5C, or whether you hit a problem similar to #3619?

@shrutisharmavsco


commented Sep 2, 2016

@jeffxtang running the quantized model made the memory warnings disappear on the iPhone 5C.
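For context on why quantization shrinks the memory footprint: eight-bit quantization maps each float weight buffer through a per-buffer min/max affine scale, cutting weight storage roughly 4x. A minimal sketch of the idea - not quantize_graph's actual implementation:

```python
# Illustration of eight-bit weight quantization: map each float weight
# into 0..255 codes with a per-buffer min/max scale. This sketches the
# idea only; it is not how quantize_graph is implemented.

def quantize(weights):
    """Return 8-bit codes plus the (min, scale) needed to reconstruct."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0 or 1.0  # avoid zero scale for constant buffers
    return [round((w - lo) / scale) for w in weights], lo, scale

def dequantize(codes, lo, scale):
    """Reconstruct approximate float weights from 8-bit codes."""
    return [lo + c * scale for c in codes]

weights = [-0.51, -0.03, 0.0, 0.27, 0.49]
codes, lo, scale = quantize(weights)
restored = dequantize(codes, lo, scale)

# Each restored weight is within half a quantization step of the original.
assert all(abs(w - r) <= scale / 2 for w, r in zip(weights, restored))
```

Each weight drops from 4 bytes to 1, at the cost of a bounded rounding error per value, which is why the quantized graph both loads faster and triggers fewer memory warnings.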

@jeffxtang

Contributor

commented Sep 2, 2016

@shrutisharmavsco cool. I'm able to run the quantized model on my iPhone 6 too without memory warnings or other problems.

@shrutisharmavsco


commented Sep 6, 2016

@jeffxtang are you noticing a speed decrease running the quantized model vs the non-quantized one?

@jeffxtang

Contributor

commented Sep 13, 2016

No @shrutisharmavsco, the performance seems the same to me. You can check out my recently released iOS app, which uses a quantized model, for comparison: https://itunes.apple.com/us/app/dog-breeds-recognition-powered/id1150923794?mt=8

@jeffxtang

Contributor

commented Sep 24, 2016

For those of you who may be interested, I just posted a blog post documenting the whole process of using a retrained Inception v3 model for my app above at http://jeffxtang.github.io

@petewarden

Member

commented Sep 28, 2016

I've got a tutorial explaining how to get this running up at https://petewarden.com/2016/09/27/tensorflow-for-mobile-poets/ now, so I'm going to close this bug. Please open new bugs if there are issues with the process. Thanks @jeffxtang for your work on this too!

@petewarden petewarden closed this Sep 28, 2016

@jeffxtang

Contributor

commented Sep 28, 2016

Great tutorial @petewarden! I noticed you use optimize_for_inference instead of strip_unused, and also convert_graphdef_memmapped_format - is this the recommended way now, since TensorFlow 0.10.0? Thanks!

@petewarden

Member

commented Sep 28, 2016

Yes, those are the recommended approaches. I'm working on a general description of the process; I'll email you a draft.
