
Android-Tensorflow model loading issue with SavedModelBundle.load() #12750

Closed
manishvatsa opened this issue Sep 1, 2017 · 6 comments

@manishvatsa

Loading the model on Android gives the error below:

FATAL EXCEPTION: main
Process: tensorflow.lgsi.com.posapplication, PID: 516
java.lang.RuntimeException: Unable to start activity ComponentInfo{tensorflow.lgsi.com.posapplication/tensorflow.lgsi.com.posapplication.MainActivity}: java.lang.UnsupportedOperationException: Loading a SavedModel is not supported in Android. File a bug at https://github.com/tensorflow/tensorflow/issues if this feature is important to you
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2727)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2788)
at android.app.ActivityThread.-wrap12(ActivityThread.java)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1504)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:154)
at android.app.ActivityThread.main(ActivityThread.java:6248)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:872)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:762)
Caused by: java.lang.UnsupportedOperationException: Loading a SavedModel is not supported in Android. File a bug at https://github.com/tensorflow/tensorflow/issues if this feature is important to you
at org.tensorflow.SavedModelBundle.load(Native Method)
at org.tensorflow.SavedModelBundle.load(SavedModelBundle.java:38)
at tensorflow.com.posapplication.tagger.PosTagger.(PosTagger.java:23)
at tensorflow.lgsi.com.posapplication.tagger.PosTagger.getInsPosTagger(PosTagger.java:30)
at tensorflow.com.posapplication.MainActivity.onCreate(MainActivity.java:56)
at android.app.Activity.performCreate(Activity.java:6757)
at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1119)
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2680)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2788) 
at android.app.ActivityThread.-wrap12(ActivityThread.java) 
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1504) 
at android.os.Handler.dispatchMessage(Handler.java:102) 
at android.os.Looper.loop(Looper.java:154) 
at android.app.ActivityThread.main(ActivityThread.java:6248) 
at java.lang.reflect.Method.invoke(Native Method) 
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:872) 
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:762) 
I/art: Starting

The model is saved in Python using the API below:
builder = tf.saved_model.builder.SavedModelBuilder(r'./tmp/model')
builder.add_meta_graph_and_variables(session, [tf.saved_model.tag_constants.SERVING])
builder.save(True)
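For what it's worth, an export produced this way can be sanity-checked from Python with tf.saved_model.loader.load before ever touching the device; the failure reported here is specific to the Android build of the runtime, not to the export. A minimal round-trip sketch (the toy graph, node names, and temp directory below are made up for illustration):

```python
import os
import tempfile

import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

export_dir = os.path.join(tempfile.mkdtemp(), "model")

# Build a toy graph and export it with SavedModelBuilder,
# mirroring the snippet above.
with tf.Session(graph=tf.Graph()) as sess:
    x = tf.placeholder(tf.float32, name="input")
    w = tf.Variable(2.0, name="weight")
    y = tf.multiply(x, w, name="output")
    sess.run(tf.global_variables_initializer())

    builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
    builder.add_meta_graph_and_variables(
        sess, [tf.saved_model.tag_constants.SERVING])
    builder.save()

# Load the SavedModel back into a fresh session and run it.
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], export_dir)
    result = sess.run("output:0", feed_dict={"input:0": 3.0})
    print(result)  # 6.0
```

If this loads fine on the desktop, the model files are healthy and the problem is purely the unsupported SavedModelBundle.load path on Android.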

The model is loaded in Android using the API below:
inferenceInterface = new TensorFlowInferenceInterface(context.getAssets(), MODEL_FILE);

@sandys

sandys commented Sep 10, 2017

Hey guys,
I'm facing the same issue. Is SavedModel the recommended way to export from TensorFlow? OTOH, I do see a commit about adding this support to Android.

We are seeing too many options here:

  1. freezing - session_t->Create() Failed using tensorflow exported model on IOS #12319
  2. savedmodelbuilder - Android-Tensorflow model loading issue with SavedModelBundle.load() #12750 (which doesn't seem to load on Android)
  3. convert_variables_to_constants - How to save a graph models#38 (not sure if this saves with all the values)
  4. tf.train.export_meta_graph - https://www.tensorflow.org/api_guides/python/meta_graph (is this the same as freezing?)
  5. optimize_for_inference - Freezing Graph Issue thtrieu/darkflow#286 . However, they wrote their own save function and graph function
  6. https://www.tensorflow.org/performance/xla/tfcompile
  7. Go directly to Keras - http://blog.stratospark.com/creating-a-deep-learning-ios-app-with-keras-and-tensorflow.html

Pretty puzzled about which of these is the production-ready way to do this.

@tensorflowbutler
Member

It has been 14 days with no activity and this issue has an assignee. Please update the label and/or status accordingly.


@tensorflowbutler
Member

Nagging Assignee: It has been 14 days with no activity and this issue has an assignee. Please update the label and/or status accordingly.

@petewarden
Contributor

The recommended way to load a model is through the freeze_graph process. There's some more detail here: https://www.tensorflow.org/mobile/prepare_models
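A rough sketch of the freezing step described above, using the in-process tf.graph_util.convert_variables_to_constants variant rather than the freeze_graph CLI (the toy graph and the "output" node name are made up for illustration; a real model would list its own output node names):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

graph = tf.Graph()
with graph.as_default():
    x = tf.placeholder(tf.float32, shape=[None, 2], name="input")
    w = tf.Variable([[1.0], [2.0]], name="weights")
    y = tf.matmul(x, w, name="output")

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # Replace every Variable with a Const holding its current value,
        # yielding one self-contained GraphDef with no checkpoint dependency.
        frozen = tf.graph_util.convert_variables_to_constants(
            sess, graph.as_graph_def(), ["output"])

# A frozen .pb like this is the kind of file TensorFlowInferenceInterface
# loads from the APK's assets directory.
with open("frozen_model.pb", "wb") as f:
    f.write(frozen.SerializeToString())
```

The freeze_graph tool in tensorflow/python/tools performs the same transformation starting from a GraphDef plus checkpoint files on disk.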

@sandys

sandys commented Jan 31, 2018

@petewarden thanks for the detailed note on the page. Could you add how to do this when using Google's own Cloud ML for training? Because the Bazel thing is extremely tricky on Cloud ML.

Additionally, it would help to have this example on https://github.com/GoogleCloudPlatform/cloudml-samples
thanks!
