
TypeError: Can't convert Operation 'MutableHashTable' to Tensor #24439

Open
chunyang-wen opened this issue Dec 19, 2018 · 8 comments
Assignees
Labels
comp:runtime c++ runtime, performance issues (cpu) stat:awaiting tensorflower Status - Awaiting response from tensorflower

Comments

@chunyang-wen

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Mac
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
  • TensorFlow installed from (source or binary): No
  • TensorFlow version (use command below): 1.8.0
  • Python version: 3.6.3
  • Bazel version (if compiling from source):
  • GCC/Compiler version (if compiling from source):
  • CUDA/cuDNN version:
  • GPU model and memory:

Describe the current behavior

A meta graph exported by export_meta_graph cannot be imported by import_meta_graph when the graph contains a MutableHashTable.

Describe the expected behavior

export_meta_graph and import_meta_graph round-trip successfully.

Code to reproduce the issue

import tensorflow as tf
from tensorflow.contrib.lookup.lookup_ops import MutableHashTable
from tensorflow.contrib.lookup.lookup_ops import MutableDenseHashTable

export_dir = 'minimal_saved_model'
# builder = tf.saved_model.builder.SavedModelBuilder(export_dir)

with tf.Session(graph=tf.Graph()) as sess:
    # MutableDenseHashTable fails the same way:
    # table = MutableDenseHashTable(key_dtype=tf.int64, value_dtype=tf.int64,
    #                               default_value=-1, empty_key=0)
    table = MutableHashTable(key_dtype=tf.int64, value_dtype=tf.int64, default_value=-1)
    aa = tf.get_variable('xxx', shape=(3,))

    meta = tf.train.export_meta_graph()
    saver = tf.train.Saver()


with tf.Session(graph=tf.Graph()) as sess1:
    tf.train.import_meta_graph(meta)  # raises TypeError

Other info / logs
Related to #11888. It seems that the SaveableObject for the table is not recreated correctly on import.

stack traces:

  File "minimal_case.py", line 22, in <module>
    tf.train.import_meta_graph(meta)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1970, in import_meta_graph
    return Saver()
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1338, in __init__
    self.build()
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1347, in build
    self._build(self._filename, build_save=True, build_restore=True)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1384, in _build
    build_save=build_save, build_restore=build_restore)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 813, in _build_internal
    saveables = self._ValidateAndSliceInputs(names_to_saveables)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 661, in _ValidateAndSliceInputs
    names_to_saveables = BaseSaverBuilder.OpListToDict(names_to_saveables)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 629, in OpListToDict
    var = ops.internal_convert_to_tensor(var, as_ref=True)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1104, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 6130, in _operation_conversion_error
    name, as_ref))
TypeError: Can't convert Operation 'MutableHashTable' to Tensor (target dtype=None, name=None, as_ref=True)
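For context on the failure: Saver's OpListToDict calls internal_convert_to_tensor on every object it is asked to save, and that call dispatches through a type-based conversion registry. An Operation such as 'MutableHashTable' has no registered conversion to a Tensor, hence the TypeError. The dispatch pattern can be sketched in plain Python (all names below are hypothetical simplifications, not TensorFlow's actual implementation):

```python
# Simplified sketch of a type-based conversion registry, analogous to
# TensorFlow's internal_convert_to_tensor dispatch (hypothetical names).

_conversion_registry = {}

def register_conversion(cls, func):
    """Register a converter for instances of cls."""
    _conversion_registry[cls] = func

def convert_to_tensor(value):
    """Dispatch on type; fail as TF does when no converter is registered."""
    for cls, func in _conversion_registry.items():
        if isinstance(value, cls):
            return func(value)
    raise TypeError("Can't convert %s %r to Tensor"
                    % (type(value).__name__, value))

class Variable:
    """Has a natural tensor value, so a converter exists for it."""
    def __init__(self, data):
        self.data = data

class Operation:
    """Like the 'MutableHashTable' op: produces no tensor output."""
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return "Operation(%r)" % self.name

register_conversion(Variable, lambda v: v.data)
# No converter is registered for Operation, so a table op that ends up
# in the Saver's variable list triggers the TypeError seen above.

print(convert_to_tensor(Variable([1.0, 2.0])))  # works
try:
    convert_to_tensor(Operation("MutableHashTable"))
except TypeError as e:
    print(e)
```

In TensorFlow itself the fix presumably has to make the imported graph rebuild a proper SaveableObject for the table, so the Saver never tries to convert the raw Operation.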
@ymodak ymodak added the contrib Anything that comes under contrib directory label Dec 19, 2018
@ymodak ymodak added comp:runtime c++ runtime, performance issues (cpu) and removed contrib Anything that comes under contrib directory labels Feb 15, 2019
@ymodak ymodak added the stat:awaiting tensorflower Status - Awaiting response from tensorflower label Feb 15, 2019
@bhack
Contributor

bhack commented May 17, 2021

I think we could close this. We no longer support TF 1.x anyway.

@kkzheng

kkzheng commented May 19, 2021

Hello, I got the same problem. Do you have a solution?

@chunyang-wen
Author

> Hello, I got the same problem. Do you have a solution?

@kkzheng Not yet. But I think what you can do is replace the import_meta_graph call yourself. I encountered the problem when trying to export and re-import the graph in order to manipulate it. Eventually, I gave up.

@mohantym
Contributor

I replicated this problem using compatibility mode in 2.8.

@bhack
Contributor

bhack commented Mar 11, 2022

The problem is that we don't have an import test for MutableHashTable, only for StaticHashTable:

def testImportedHashTable(self, is_anonymous):
  g = ops.Graph()
  with g.as_default():
    t = lookup_ops.StaticHashTable(
        lookup_ops.KeyValueTensorInitializer(["a"], [1]), 2)
    init_op = t._init_op
    op = t.lookup(ops.convert_to_tensor(["a"]))
    meta_graph = saver.export_meta_graph()

    def f():
      saver.import_meta_graph(meta_graph)
      return ops.get_default_graph().get_tensor_by_name(op.name)

    wrapped = wrap_function.wrap_function(f, [])
    pruned_init_fn = wrapped.prune(
        (), [wrapped.graph.get_operation_by_name(init_op.name)])
    self.evaluate(pruned_init_fn())
    self.assertAllEqual([1], wrapped())

@bhack
Contributor

bhack commented Mar 11, 2022

Now it is covered by an expected failing test #55200 that we could merge.

We could investigate later how to fix it and collect some hints from the internal team on what kind of PR they want.

@bhack
Contributor

bhack commented Apr 8, 2022

This is now covered by an expected failing test.
To contribute a fix with a PR, a contributor will need to write the code that makes this test pass after removing the @unittest.expectedFailure decorator.

# TODO(https://github.com/tensorflow/tensorflow/issues/24439): remove expectedFailure when fixed
@unittest.expectedFailure
@test_util.run_v2_only
def testImportedHashTable(self, is_anonymous):
  g = ops.Graph()
  with g.as_default():
    default_val = -1
    keys = constant_op.constant(["brain", "salad", "surgery", "tarkus"])
    values = constant_op.constant([0, 1, 2, 3], dtypes.int64)
    table = lookup_ops.MutableHashTable(
        dtypes.string,
        dtypes.int64,
        default_val,
        experimental_is_anonymous=is_anonymous)
    self.evaluate(table.insert(keys, values))
    op = table.lookup(constant_op.constant(["brain", "salad", "tank"]))
    meta_graph = saver.export_meta_graph()

    def f():
      saver.import_meta_graph(meta_graph)
      return ops.get_default_graph().get_tensor_by_name(op.name)

    wrapped = wrap_function.wrap_function(f, [])
    self.assertAllEqual([0, 1, -1], wrapped())
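For contributors unfamiliar with the decorator: @unittest.expectedFailure inverts the test outcome, so the suite stays green while the bug exists and flags an "unexpected success" once the bug is fixed. A small self-contained illustration (not TensorFlow code):

```python
import io
import unittest

class ExpectedFailureDemo(unittest.TestCase):

    @unittest.expectedFailure
    def test_known_bug(self):
        # The bug is still present, so this assertion fails; unittest
        # records it as an expected failure and the suite stays green.
        self.assertEqual(1 + 1, 3)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ExpectedFailureDemo)
result = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0).run(suite)

print("failures:", len(result.failures))                    # 0
print("expected failures:", len(result.expectedFailures))   # 1
print("suite successful:", result.wasSuccessful())          # True
```

Once a fix lands and the underlying assertion starts passing, the run reports an unexpected success, which is the signal to delete the decorator along with the TODO comment.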

@mihaimaruseac @theadactyl @yarri-oss Do you think we could use a special label, other than the eventual stat:contributions welcome, when a ticket is covered by an expected failing test, e.g. stat:test_covered?
I think it could be really useful for two main goals:

  • to set a potential preference for contributing a PR, because we already have a specific test that needs to pass.
  • so that @tensorflow/dev-support does not have to periodically and manually check an often non-minimized gist on Colab against nightly and new TF releases to verify that the bug/FR wasn't "silently" solved.

@bhack
Contributor

bhack commented Dec 29, 2022

Can we introduce this specific label for test covered contributions?
@MichaelHudgins @rishikasinha-tf @learning-to-play

/cc @mihaimaruseac

Development

No branches or pull requests

6 participants