iOS example DecodeJpeg issue with Image Retraining model #2883

Closed
mat-peterson opened this issue Jun 15, 2016 · 20 comments

@mat-peterson

Environment info

Operating System: iOS

Steps to reproduce

  1. Follow the contrib/makefile/README to install the tensorflow iOS core lib
  2. Create my own model with the Image Retraining tutorial
  3. Run the iOS example, error is logged.

Logs or other output that would be helpful

Running model failed:Invalid argument: No OpKernel was registered to support Op 'DecodeJpeg' with these attrs [[Node: DecodeJpeg = DecodeJpeg[acceptable_fraction=1, channels=3, fancy_upscaling=true, ratio=1, try_recover_truncated=false](DecodeJpeg/contents)]]

Related to

#2754, except that I want to use the .pb file generated from the Image Retraining tutorial.

@petewarden petewarden self-assigned this Jun 15, 2016
@petewarden
Contributor

Sorry you're hitting problems! Since DecodeJpeg isn't supported as part of the core, you'll need to strip it out of the graph first. I'm working on a more user-friendly approach, but you should be able to run the strip_unused script on it, something like this:

bazel build tensorflow/python/tools:strip_unused && \
bazel-bin/tensorflow/python/tools/strip_unused \
--input_graph=your_retrained_graph.pb \
--output_graph=stripped_graph.pb \
--input_node_names=Mul \
--output_node_names=final_result \
--input_binary=true

Let me know if that helps.
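Conceptually, strip_unused walks backwards from the requested output nodes and keeps only what is reachable, cutting the walk at the nodes re-declared as inputs (which become placeholders). A rough pure-Python sketch of that reachability pass — the toy graph below is made up, not the real Inception graph:

```python
# Sketch of the pruning idea behind strip_unused: walk backwards from the
# output nodes and keep only what is reachable, stopping at the nodes we
# re-declare as inputs. Not TensorFlow's actual implementation.
def strip_unused(graph, input_names, output_names):
    keep = set(input_names)          # inputs become placeholders; stop here
    stack = list(output_names)
    while stack:
        name = stack.pop()
        if name in keep:
            continue
        keep.add(name)
        stack.extend(graph[name])    # visit this node's input edges
    return {n: deps for n, deps in graph.items() if n in keep}

# Toy graph: node -> list of input nodes. DecodeJpeg feeds Mul, which
# feeds the rest of the network down to final_result.
toy_graph = {
    "DecodeJpeg/contents": [],
    "DecodeJpeg": ["DecodeJpeg/contents"],
    "Mul": ["DecodeJpeg"],
    "conv": ["Mul"],
    "final_result": ["conv"],
}

stripped = strip_unused(toy_graph, input_names=["Mul"], output_names=["final_result"])
print(sorted(stripped))  # -> ['Mul', 'conv', 'final_result']
```

With "Mul" declared as the input, the walk never reaches DecodeJpeg, so the unsupported op drops out of the graph — which is why the stripped model loads on iOS.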

@mat-peterson
Author

@petewarden
I used your strip commands. That got rid of the DecodeJpeg error, but a new error has appeared.
Running model failed: Not found: FeedInputs: unable to find feed output input

@petewarden
Contributor

Great! You should just need to update the input and output layer names to "Mul" and "final_result" respectively, here:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/ios_examples/camera/CameraExampleViewController.mm#L300

@mat-peterson
Author

Awesome! That fixed the last error, but now I'm getting:

Running model failed: Invalid argument: computed output size would be negative [[Node: pool_3 = AvgPool[T=DT_FLOAT, data_format="NHWC", ksize=[1, 8, 8, 1], padding="VALID", strides=[1, 1, 1, 1], _device="/job:localhost/replica:0/task:0/cpu:0"](mixed_10/join)]]

@petewarden
Contributor

Ah yes! The input sizes need to be 299, not 224. You'll also need to change the mean and std values both to 128. Here's the code I think you'll need:

  const int wanted_width = 299;
  const int wanted_height = 299;
  const int wanted_channels = 3;
  const float input_mean = 128.0f;
  const float input_std = 128.0f;

https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/ios_examples/camera/CameraExampleViewController.mm#L272

We will be collecting this in proper documentation soon too, but thanks for testing this out.
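The "computed output size would be negative" error falls straight out of the VALID-padding arithmetic. pool_3 is an 8x8 average pool with stride 1, and Inception v3 shrinks a 299x299 input down to an 8x8 feature map by that point; a 224x224 input only yields roughly a 5x5 map (approximate figure), which is too small for an 8x8 window. A small sketch of the arithmetic:

```python
# Output size of a VALID-padded pool/conv: floor((in - ksize) / stride) + 1.
# pool_3 in Inception v3 is an 8x8 AvgPool with stride 1 and VALID padding,
# so it needs a feature map at least 8 pixels on a side.
def valid_out_size(in_size, ksize, stride):
    return (in_size - ksize) // stride + 1

# With the expected 299x299 input, the map reaching pool_3 is 8x8: fine.
print(valid_out_size(8, ksize=8, stride=1))   # -> 1

# With a 224x224 input the map is only about 5x5 (approximate figure),
# so the computed output size goes negative and the run fails.
print(valid_out_size(5, ksize=8, stride=1))   # -> -2
```

The mean/std of 128 simply rescales 8-bit pixels from [0, 255] into roughly [-1, 1], which is the range the retrained Inception graph expects.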

@mat-peterson
Author

It worked! Thanks for all the help.

@Tugees

Tugees commented Jul 19, 2016

Hi,

I encountered the same errors and followed the suggestions. No longer getting any errors but I get the same prediction no matter what I point the camera at. What could be the problem?

@Zulqurnain24

Zulqurnain24 commented Aug 7, 2016

After going through the procedure described by Pete, I am getting this result:
"I /Users/mohammadzulqurnain/Downloads/tensorflow-master/tensorflow/contrib/ios_examples/camera/tensorflow_utils.mm:130] Session created.
I /Users/mohammadzulqurnain/Downloads/tensorflow-master/tensorflow/contrib/ios_examples/camera/tensorflow_utils.mm:133] Graph created.
I /Users/mohammadzulqurnain/Downloads/tensorflow-master/tensorflow/contrib/ios_examples/camera/tensorflow_utils.mm:149] Creating session.
W tensorflow/core/framework/op_def_util.cc:332] Op BatchNormWithGlobalNormalization is deprecated. It will cease to work in GraphDef version 9. Use tf.nn.batch_normalization().
2016-08-07 06:17:54.548 CameraExample[7424:1747649] Received memory warning."
And eventually the application crashes. Can someone help me out with this?
Thanks in advance.

@Shoshin23

Just to let folks in the future know, the above steps outlined by @petewarden worked well for me. I was using macOS Sierra with Xcode 8.

@ghazi256

Worked for me following @petewarden's instructions.

@javadba

javadba commented Apr 3, 2017

This is a lot of steps .. is there anything more "baked" / mature that is closer to working out of the box?

@scm-ns

scm-ns commented Apr 29, 2017

petewarden's solutions work for me. There is an additional memory consumption error on iOS devices. I have added a comment here mortenjust/trainer-mac#3, and Pete talks about it more here:
#4255

@scm-ns

scm-ns commented Apr 29, 2017

@Zulqurnain24 The app is crashing because Apple force-closes it when the TF model takes up too much memory. Solution here: #4255 and the comment above.

@Zulqurnain24

Thanks @scm-ns. I followed these instructions https://petewarden.com/2016/09/27/tensorflow-for-mobile-poets/ to reduce the footprint of the graph file, and now it is working fine.

@lxtGH

lxtGH commented May 18, 2017

Hi, I followed all the steps and the model gives the right answer when run with bazel, but I still get this error when I run on Android.
Inference exception: java.lang.IllegalArgumentException: computed output size would be negative
[[Node: pool_3 = AvgPool[T=DT_FLOAT, data_format="NHWC", ksize=[1, 8, 8, 1], padding="VALID", strides=[1, 1, 1, 1], _device="/job:localhost/replica:0/task:0/cpu:0"](mixed_10/join)]]

help !

@AlvarezAriel

I can reproduce the issue from @lxtGH

java.lang.IllegalArgumentException: computed output size would be negative
[[Node: pool_3 = AvgPool[T=DT_FLOAT, data_format="NHWC", ksize=[1, 8, 8, 1], padding="VALID", strides=[1, 1, 1, 1], _device="/job:localhost/replica:0/task:0/cpu:0"](mixed_10/join)]]

@jakublipinski
Contributor

I fixed @lxtGH & @AlvarezAriel problem by running:
/tensorflow/bazel-bin/tensorflow/tools/quantization/quantize_graph --input=YOUR_STRIPPED_MODEL.pb --output_node_names=final_result --output=quantized_stripped_model.pb --mode=weights
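With --mode=weights, quantize_graph stores each weight tensor as 8-bit values plus a min/max range, shrinking the file (and the memory footprint) to roughly a quarter. A rough pure-Python sketch of that encoding — the general idea, not TensorFlow's exact code:

```python
# Sketch of 8-bit weight quantization: map each float to the nearest of
# 256 evenly spaced levels between the tensor's min and max, and keep
# (min, step) so the floats can be approximately recovered.
def quantize(weights):
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0           # avoid div-by-zero for constants
    q = [round((w - lo) / scale) for w in weights]
    return q, lo, scale

def dequantize(q, lo, scale):
    return [lo + v * scale for v in q]

w = [-1.0, -0.5, 0.0, 0.25, 1.0]             # made-up weight values
q, lo, scale = quantize(w)
restored = dequantize(q, lo, scale)
# Each restored weight is within half a quantization step of the original.
assert all(abs(a - b) <= scale / 2 + 1e-9 for a, b in zip(w, restored))
```

The small rounding error rarely hurts classification accuracy, which is why weight quantization is the usual first step for shrinking a retrained Inception graph for mobile.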

@foslabs

foslabs commented Jul 28, 2017

Tried the quantize_graph command by @jakublipinski to regenerate the pb file. It doesn't solve the 'negative output' issue in my case.
I followed all instructions above. Everything works great until the log shows "...Running model failed: Invalid argument: computed output size would be negative...". Does the training picture size matter here?

@tikamsingh

I have changed all the file names and sizes, but I am still getting the error below.

Source Code URL:
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/ios/camera

Changed code: const int wanted_input_width = 229;
const int wanted_input_height = 229;
const int wanted_input_channels = 3;
const float input_mean = 128.0f;
const float input_std = 128.0f;
const std::string input_layer_name = "Mul";
const std::string output_layer_name = "final_result";

computed output size would be negative
[[Node: pool_3 = AvgPool[T=DT_FLOAT, data_format="NHWC", ksize=[1, 8, 8, 1], padding="VALID", strides=[1, 1, 1, 1], _device="/job:localhost/replica:0/task:0/cpu:0"](mixed_10/join)]]

Please suggest what we are doing wrong.
The built-in pb and txt files work fine (imagenet_comp_graph_label_strings.txt, tensorflow_inception_graph.pb),
but the newly created pb and txt files do not (rounded_graph.pb and retrained_labels.txt).
Note: I also renamed the pb and txt files.

@deepaksuresh

@petewarden I used tf.image.decode_jpeg to read in images while training my model, and I want to deploy the model on Android. Since decode_jpeg is not available on Android, is there an alternate solution using OpenCV or other Java libraries? The pixel values are different when I read the input image with decode_jpeg compared to OpenCV, which makes the logits differ for the same image. How can I get the same behaviour on Android?
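One thing worth checking when two JPEG decoders disagree: channel order (OpenCV returns BGR, most other decoders RGB), and then the actual magnitude of the per-pixel difference, since JPEG decoders legitimately differ by a few levels due to different IDCT and chroma-upsampling approximations. A hedged sketch of that comparison — the two tiny "decoded images" below are made-up stand-ins, not real decoder output:

```python
# Compare two JPEG decoders: fix channel order first, then measure the
# worst per-pixel difference. A disagreement of a few levels is normal;
# a huge one usually means the channels are still swapped.
def bgr_to_rgb(img):
    return [[px[::-1] for px in row] for row in img]

def max_abs_diff(a, b):
    return max(
        abs(ca - cb)
        for row_a, row_b in zip(a, b)
        for px_a, px_b in zip(row_a, row_b)
        for ca, cb in zip(px_a, px_b)
    )

rgb_decoded = [[(200, 30, 10), (0, 128, 255)]]   # stand-in: decode_jpeg output
bgr_decoded = [[(11, 31, 199), (254, 128, 1)]]   # stand-in: OpenCV output (BGR)

diff = max_abs_diff(rgb_decoded, bgr_to_rgb(bgr_decoded))
print(diff)  # -> 1: same image up to decoder rounding
```

If the logits still differ noticeably after the channel order and the 128/128 mean/std normalization match, small decoder-level pixel differences are the remaining suspect.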
