
WARNING:tensorflow:AutoGraph could not transform and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. #37144

Closed
nishuang83 opened this issue Feb 27, 2020 · 41 comments
Assignees: ymodak
Labels: comp:autograph, stale, stat:awaiting response, TF 2.1, type:bug

Comments

@nishuang83

System information

  • Have I written custom code: Yes
  • OS Platform and Distribution: Windows 10
  • TensorFlow installed from (source or binary): Anaconda
  • TensorFlow version (use command below): 2.1.0
  • Python version: 3.7.4

Describe the current behavior
I'm using Anaconda TensorFlow with Spyder. When I run my custom layer, it shows the warning below:

WARNING:tensorflow:AutoGraph could not transform <bound method GroupSoftmax.call of <__main__.GroupSoftmax object at 0x000002A957B843C8>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: 
WARNING: AutoGraph could not transform <bound method GroupSoftmax.call of <__main__.GroupSoftmax object at 0x000002A957B843C8>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.

Describe the method I tried
I have already tried the solution that others provided: pip install gast==0.2.2
I also re-installed all of the software (Anaconda, TensorFlow, Spyder).
However, these methods don't solve my problem.
Is there any other solution?

Standalone code to reproduce the issue

# Imports added for a self-contained repro (inferred from the snippet; tf_utils is TF-internal):
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.python.keras.utils import tf_utils

class GroupSoftmax(layers.Layer):
    def __init__(self, axis=-1, **kwargs):
        super(GroupSoftmax, self).__init__(**kwargs)
        self.supports_masking = True
        self.axis = axis

    def call(self, inputs):
        return tf.divide(inputs, tf.reduce_sum(inputs, axis=self.axis))

    def get_config(self):
        config = {'axis': self.axis}
        base_config = super(GroupSoftmax, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))
    
    @tf_utils.shape_type_conversion
    def compute_output_shape(self, input_shape):
        return input_shape

'''
-----------------network of g-----------------
'''
Nodes = 64  # placeholder; the value of Nodes is not given in the original report
gModel = tf.keras.Sequential([
    # Add a fully connected layer with Nodes neurons; "input_shape" is the dimension of the
    # input this layer accepts, "activation" specifies the layer's activation function
    layers.Dense(Nodes, activation='sigmoid', input_shape=(60,), use_bias=False),  # packed data should be (3000, 10, 6)
    # Add the second layer
    layers.Dense(Nodes, activation='sigmoid', use_bias=False),
    # Add the third layer
    layers.Dense(Nodes, activation='sigmoid', use_bias=False),
    # Add the fourth layer
    layers.Dense(Nodes, activation='sigmoid', use_bias=False),
    # Add the fifth layer
    layers.Dense(Nodes, activation='sigmoid', use_bias=False),
    # Add the sixth layer, changing the number of nodes
    layers.Dense(66, activation='sigmoid', use_bias=False),
    # Add the seventh layer, changing the shape
    layers.Reshape((11, 6)),
    # Add the output layer: grouped softmax
    # layers.Dense(6, activation=layers.Softmax(axis=0), input_shape=(11, 6), use_bias=False),  # [11, 6]
    # layers.Softmax(axis=0)
    GroupSoftmax(axis=0)
])

gModel.summary()   

Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

@nishuang83 nishuang83 added the type:bug Bug label Feb 27, 2020
@ravikyram ravikyram added the TF 2.1 for tracking issues in 2.1 release label Feb 28, 2020
@ravikyram
Contributor

@NSssss

I have tried in Colab with TF 2.1.0 and I am not seeing any issue. Please find the gist here. I have made a few assumptions in the code while executing. If you feel there is an issue, please update the attached Colab and help me reproduce it. That helps me localize the issue faster. Thanks!

@ravikyram ravikyram added the stat:awaiting response Status - Awaiting response from author label Feb 28, 2020
@nishuang83
Author

@NSssss

I have tried in Colab with TF 2.1.0 and I am not seeing any issue. Please find the gist here. I have made a few assumptions in the code while executing. If you feel there is an issue, please update the attached Colab and help me reproduce it. That helps me localize the issue faster. Thanks!

Thanks. There's no problem with Colab, but there's always a warning when using Spyder with Anaconda TensorFlow. It seems to have no influence on the results, but I'm not sure whether it affects speed.

@tensorflowbutler tensorflowbutler removed the stat:awaiting response Status - Awaiting response from author label Mar 1, 2020
@ravikyram ravikyram added the comp:keras Keras related issues label Mar 3, 2020
@ravikyram ravikyram assigned ymodak and unassigned ravikyram Mar 3, 2020
@ymodak ymodak added comp:autograph Autograph related issues and removed comp:keras Keras related issues labels Mar 17, 2020
@ymodak
Contributor

ymodak commented Mar 17, 2020

You can safely ignore the warning log, as it's intended for debugging AutoGraph issues. Thanks!
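If the log is noisy, a minimal sketch of hiding it (assumes TF 2.x; note this raises the threshold for all WARNING-level TensorFlow messages, not just this one):

import tensorflow as tf

# Raise the TF logger threshold so WARNING-level messages are no longer printed.
tf.get_logger().setLevel('ERROR')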

@ymodak ymodak added the stat:awaiting response Status - Awaiting response from author label Mar 17, 2020
@nishuang83
Author

It has been 14 days with no activity and the awaiting response label was assigned. Is this still an issue?

Yes, this problem still exists.

@tensorflowbutler tensorflowbutler removed the stat:awaiting response Status - Awaiting response from author label Apr 4, 2020
@mdanatg

mdanatg commented Apr 21, 2020

@NSssss

The warning will not hurt performance, but I'm curious what the cause might be. Can you add this line just after importing tensorflow: tf.autograph.set_verbosity(3, True)? That will print additional details with the warning message that we can use to investigate.
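For reference, a minimal sketch of the suggested call (tf.autograph.set_verbosity takes a level and an alsologtostdout flag):

import tensorflow as tf

# Level 3 prints conversion details; True also echoes them to stdout.
tf.autograph.set_verbosity(3, True)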

@ravikyram ravikyram added the stat:awaiting response Status - Awaiting response from author label Apr 22, 2020
@google-ml-butler

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you.

@google-ml-butler google-ml-butler bot added the stale This label marks the issue/pr stale - to be closed automatically if no activity label Apr 29, 2020
@google-ml-butler

Closing as stale. Please reopen if you'd like to work on this further.


@KaticaR

KaticaR commented May 21, 2020

Hello,

I have used tf.autograph.set_verbosity(3, True) and here is what I got:

INFO:tensorflow:Converted call: <function TensorLikeDataAdapter.__init__.<locals>.permutation at 0x00000225B41F2D38>
    args: (<tf.Tensor 'args_0:0' shape=() dtype=int64>,)
    kwargs: {}

Converted call: <function TensorLikeDataAdapter.__init__.<locals>.permutation at 0x00000225B41F2D38>
    args: (<tf.Tensor 'args_0:0' shape=() dtype=int64>,)
    kwargs: {}

INFO:tensorflow:Whitelisted: <function TensorLikeDataAdapter.__init__.<locals>.permutation at 0x00000225B41F2D38>: DoNotConvert rule for tensorflow
Whitelisted: <function TensorLikeDataAdapter.__init__.<locals>.permutation at 0x00000225B41F2D38>: DoNotConvert rule for tensorflow
INFO:tensorflow:Converted call: <function TensorLikeDataAdapter.__init__.<locals>.slice_batch_indices at 0x00000225B41F2438>
    args: (<tf.Tensor 'args_0:0' shape=(100,) dtype=int64>,)
    kwargs: {}

Converted call: <function TensorLikeDataAdapter.__init__.<locals>.slice_batch_indices at 0x00000225B41F2438>
    args: (<tf.Tensor 'args_0:0' shape=(100,) dtype=int64>,)
    kwargs: {}

INFO:tensorflow:Whitelisted: <function TensorLikeDataAdapter.__init__.<locals>.slice_batch_indices at 0x00000225B41F2438>: DoNotConvert rule for tensorflow
Whitelisted: <function TensorLikeDataAdapter.__init__.<locals>.slice_batch_indices at 0x00000225B41F2438>: DoNotConvert rule for tensorflow
INFO:tensorflow:Converted call: <function TensorLikeDataAdapter.slice_inputs.<locals>.grab_batch at 0x000002258096BE58>
    args: (<tf.Tensor 'args_0:0' shape=(None,) dtype=int64>, (<tf.Tensor 'args_1:0' shape=(100, 35, 1) dtype=float32>,))
    kwargs: {}

Converted call: <function TensorLikeDataAdapter.slice_inputs.<locals>.grab_batch at 0x000002258096BE58>
    args: (<tf.Tensor 'args_0:0' shape=(None,) dtype=int64>, (<tf.Tensor 'args_1:0' shape=(100, 35, 1) dtype=float32>,))
    kwargs: {}

INFO:tensorflow:Whitelisted: <function TensorLikeDataAdapter.slice_inputs.<locals>.grab_batch at 0x000002258096BE58>: DoNotConvert rule for tensorflow
Whitelisted: <function TensorLikeDataAdapter.slice_inputs.<locals>.grab_batch at 0x000002258096BE58>: DoNotConvert rule for tensorflow
INFO:tensorflow:Converted call: <function Model.make_predict_function.<locals>.predict_function at 0x00000225B4276EE8>
    args: (<tensorflow.python.data.ops.iterator_ops.OwnedIterator object at 0x000002264AF74A48>,)
    kwargs: {}

Converted call: <function Model.make_predict_function.<locals>.predict_function at 0x00000225B4276EE8>
    args: (<tensorflow.python.data.ops.iterator_ops.OwnedIterator object at 0x000002264AF74A48>,)
    kwargs: {}

INFO:tensorflow:Cache hit for entity <function Model.make_predict_function.<locals>.predict_function at 0x00000225B4276EE8> subkey (<tensorflow.python.autograph.core.converter.ConversionOptions object at 0x000002264AF74FC8>, frozenset({'self'})): _ConvertedEntityFactoryInfo(tf__predict_function in tmpmvnq3xyi)
Cache hit for entity <function Model.make_predict_function.<locals>.predict_function at 0x00000225B4276EE8> subkey (<tensorflow.python.autograph.core.converter.ConversionOptions object at 0x000002264AF74FC8>, frozenset({'self'})): _ConvertedEntityFactoryInfo(tf__predict_function in tmpmvnq3xyi)
INFO:tensorflow:Error transforming entity <function Model.make_predict_function.<locals>.predict_function at 0x00000225B4276EE8>
Traceback (most recent call last):
  File "C:\Users\katica.ristic\Anaconda3\lib\site-packages\tensorflow\python\autograph\impl\api.py", line 538, in converted_call
    converted_f = conversion.convert(target_entity, program_ctx)
  File "C:\Users\katica.ristic\Anaconda3\lib\site-packages\tensorflow\python\autograph\impl\conversion.py", line 362, in convert
    return _instantiate(entity, converted_entity_info, free_nonglobal_var_names)
  File "C:\Users\katica.ristic\Anaconda3\lib\site-packages\tensorflow\python\autograph\impl\conversion.py", line 300, in _instantiate
    factory = converted_entity_info.get_factory()
  File "C:\Users\katica.ristic\Anaconda3\lib\site-packages\tensorflow\python\autograph\impl\conversion.py", line 94, in get_factory
    assert self.module_name in sys.modules
AssertionError
Error transforming entity <function Model.make_predict_function.<locals>.predict_function at 0x00000225B4276EE8>
WARNING:tensorflow:AutoGraph could not transform <function Model.make_predict_function.<locals>.predict_function at 0x00000225B4276EE8> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: 
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING: AutoGraph could not transform <function Model.make_predict_function.<locals>.predict_function at 0x00000225B4276EE8> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: 
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
INFO:tensorflow:Converted call: <bound method Model.predict_step of <tensorflow.python.keras.engine.sequential.Sequential object at 0x000002264C0EBC88>>
    args: ((<tf.Tensor 'IteratorGetNext:0' shape=(None, 35, 1) dtype=float32>,),)
    kwargs: {}

Converted call: <bound method Model.predict_step of <tensorflow.python.keras.engine.sequential.Sequential object at 0x000002264C0EBC88>>
    args: ((<tf.Tensor 'IteratorGetNext:0' shape=(None, 35, 1) dtype=float32>,),)
    kwargs: {}

INFO:tensorflow:Whitelisted <bound method Model.predict_step of <tensorflow.python.keras.engine.sequential.Sequential object at 0x000002264C0EBC88>>: from cache
Whitelisted <bound method Model.predict_step of <tensorflow.python.keras.engine.sequential.Sequential object at 0x000002264C0EBC88>>: from cache
Traceback (most recent call last):
  File "C:\Users\katica.ristic\Anaconda3\lib\site-packages\tensorflow\python\autograph\impl\api.py", line 538, in converted_call
    converted_f = conversion.convert(target_entity, program_ctx)
  File "C:\Users\katica.ristic\Anaconda3\lib\site-packages\tensorflow\python\autograph\impl\conversion.py", line 362, in convert
    return _instantiate(entity, converted_entity_info, free_nonglobal_var_names)
  File "C:\Users\katica.ristic\Anaconda3\lib\site-packages\tensorflow\python\autograph\impl\conversion.py", line 300, in _instantiate
    factory = converted_entity_info.get_factory()
  File "C:\Users\katica.ristic\Anaconda3\lib\site-packages\tensorflow\python\autograph\impl\conversion.py", line 94, in get_factory
    assert self.module_name in sys.modules
AssertionError

@KaticaR

KaticaR commented May 21, 2020

I hope this helps solve the issue.
I am using Windows 10, Anaconda, TensorFlow 2.2.0, and Python 3.7.

@mdanatg

mdanatg commented May 21, 2020

@KaticaR thank you for the logs; they seem to indicate an incompatibility in the toolchain, though it's unclear which piece. At any rate, the faulting piece was refactored recently. If you have the chance, please retry with tf-nightly.

@neomatrix369

Hi, I got this as well; as the message says, I set the env variable and recorded the stack trace:

WARNING: AutoGraph could not transform <function initialize_tpu_system.<locals>._tpu_init_fn at 0x7fdbda21fcb0> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: No module named 'tensorflow_core.keras'
Traceback (most recent call last):
  File "/opt/conda/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3331, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-9-2eec46e4b091>", line 13, in <module>
    tf.tpu.experimental.initialize_tpu_system(tpu)
  File "/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/tpu/tpu_strategy_util.py", line 103, in initialize_tpu_system
  File "/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 942, in numpy
  File "/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 910, in _numpy
  File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: NodeDef expected inputs 'string' do not match 0 inputs specified; Op<name=_Send; signature=tensor:T -> ; attr=T:type; attr=tensor_name:string; attr=send_device:string; attr=send_device_incarnation:int; attr=recv_device:string; attr=client_terminated:bool,default=false; is_stateful=true>; NodeDef: {{node _Send}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 2044, in showtraceback
    stb = value._render_traceback_()
AttributeError: 'InvalidArgumentError' object has no attribute '_render_traceback_'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.7/site-packages/IPython/core/ultratb.py", line 1148, in get_records
    return _fixed_getinnerframes(etb, number_of_lines_of_context, tb_offset)
  File "/opt/conda/lib/python3.7/site-packages/IPython/core/ultratb.py", line 316, in wrapped
    return f(*args, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/IPython/core/ultratb.py", line 350, in _fixed_getinnerframes
    records = fix_frame_records_filenames(inspect.getinnerframes(etb, context))
  File "/opt/conda/lib/python3.7/inspect.py", line 1502, in getinnerframes
    frameinfo = (tb.tb_frame,) + getframeinfo(tb, context)
  File "/opt/conda/lib/python3.7/inspect.py", line 1460, in getframeinfo
    filename = getsourcefile(frame) or getfile(frame)
  File "/opt/conda/lib/python3.7/inspect.py", line 696, in getsourcefile
    if getattr(getmodule(object, filename), '__loader__', None) is not None:
  File "/opt/conda/lib/python3.7/inspect.py", line 733, in getmodule
    if ismodule(module) and hasattr(module, '__file__'):
  File "/opt/conda/lib/python3.7/site-packages/tensorflow/__init__.py", line 50, in __getattr__
    from ._api.v2 import data
  File "/opt/conda/lib/python3.7/site-packages/tensorflow/__init__.py", line 44, in _load
    from ._api.v2 import audio
  File "/opt/conda/lib/python3.7/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'tensorflow_core.keras'

I was running TensorFlow in a Kaggle kernel.

@KaticaR

KaticaR commented Jun 7, 2020

When we downgraded the versions of Keras and TensorFlow to match, the error disappeared.

@mdanatg

mdanatg commented Jul 9, 2020

Sorry, for 2.2 please use tf.autograph.experimental.do_not_convert.
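A minimal sketch of the decorator form (the normalize function is a hypothetical stand-in):

import tensorflow as tf

@tf.autograph.experimental.do_not_convert
def normalize(x):
    # AutoGraph skips this function and runs it as-is, so no warning is emitted.
    return x / tf.reduce_sum(x, axis=-1, keepdims=True)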

@yaohua32

@mdanatg Thanks, the problem is fixed with this solution.

@spawnaga

@mobassir94 try replacing the following to avoid the warning:

ds_train = ds_train.map(lambda img, label: (img, tuple([label])))

with:

unpack_label = lambda img, label: (img, tuple([label]))
unpack_label = tf.autograph.do_not_convert(unpack_label)  # Runtime not compatible
ds_train = ds_train.map(unpack_label)

The slowness problem is unrelated, though.

Sorry, would you please clarify your fix? What does ds_train refer to?

@mdanatg

mdanatg commented Jul 13, 2020

@spawnaga this workaround refers specifically to @mobassir94's code. It is not a general fix. Also note that the issue is fixed in tf-nightly.

@mdanatg

mdanatg commented Jul 13, 2020

@fchollet might have better advice on active user groups who could answer @mobassir94's questions.

@shubhamgajbhiye1994

@mobassir94 Can you please give more detail about the code below (I am getting the same error with the TensorFlow API)? Does it create a TFRecord?

unpack_label = lambda img, label: (img, tuple([label]))
unpack_label = tf.autograph.do_not_convert(unpack_label)  # Runtime not compatible
ds_train = ds_train.map(unpack_label)

@mdanatg

mdanatg commented Jan 23, 2021

@shubhamgajbhiye1994 can you create a small snippet that we can run separately to reproduce the error you're seeing? That would help us give you better advice.

@jda5

jda5 commented Aug 15, 2021

I have this same issue. My error logs:

WARNING:tensorflow:AutoGraph could not transform <function Model.make_predict_function.<locals>.predict_function at 0x13ffac310> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: unsupported operand type(s) for -: 'NoneType' and 'int'
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
2021-08-15 10:38:47.649790: I tensorflow/compiler/tf2mlcompute/kernels/mlc_subgraph_op.cc:326] Compute: Failed in processing TensorFlow graph sequential/MLCSubgraphOp_2_0 with frame_id = 0 and iter_id = 0 with error: Internal: ExecuteMLCInferenceGraph: Failed to execute MLC inference graph. (error will be reported 5 times unless TF_MLC_LOGGING=1).
2021-08-15 10:38:47.652117: F tensorflow/core/framework/op_kernel.cc:983] Check failed: outputs_[index].tensor == nullptr (0x13fd05cb0 vs. nullptr)
zsh: abort      python test.py

Code to reproduce:

import numpy as np
from tensorflow.keras.datasets import fashion_mnist
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.losses import SparseCategoricalCrossentropy

(X_train, y_train), (X_test, y_test) = fashion_mnist.load_data()

X_train = X_train / 255
X_test = X_test / 255

model = Sequential([
    Flatten(input_shape=X_train.shape[1:]),
    Dense(30, activation='relu'),
    Dense(30, activation='relu'),
    Dense(10, activation='softmax')
    ])

model.compile(optimizer='adam',
              loss=SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

model.fit(X_train, y_train, epochs=30)

y_pred = model.predict(X_test)  # <--- Error occurs here
print(np.argmax(y_pred[0]))

I am on a MacBook Air (M1, 2020) running macOS Big Sur (11.4).

@KaticaR

KaticaR commented Aug 16, 2021 via email

@bayeslearner

Would you be more specific? What are the right versions, and how do we set them?

@bayeslearner

@jda5 I get the same error as well. Are you using the older version of TensorFlow or Apple's Metal plugin?
I tried both:

  1. The error above happens with the older version and the now-archived Apple repository.
  2. The more official Apple package requires macOS >= 12.0.

@mdanatg

mdanatg commented Aug 17, 2021

Could you set the verbosity to 3 and include the log messages, along with the pip package versions that you used? That would help reproduce the issue. I tried the snippet in Colab on the stable version, but couldn't reproduce it.

@mdanatg

mdanatg commented Aug 17, 2021

It would also be a good idea to file a new issue - this warning is very generic and could signal many different issues.

@liqinglin54951

liqinglin54951 commented Aug 21, 2021

All my installed packages: https://blog.csdn.net/Linli522362242/article/details/108037567

pip install gast==0.2.2

pip install tensorflow-serving-api==2.1.0

(screenshots of the package lists and install output omitted)

@mdanatg

mdanatg commented Aug 21, 2021

@liqinglin54951 that looks like an incompatible version of gast. Could you try 0.3.0?

@falahfakhri-Iraq

I have Python 3.9.6, Windows 10, and TensorFlow 2.6, and I got the following error:

Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output.
Cause: Unable to locate the source code of <function f1_m at 0x0000024D9B260DC0>. Note that functions defined in certain environments, like the interactive Python shell, do not expose their source code. If that is the case, you should define them in a .py source file. If you are certain the code is graph-compatible, wrap the call using @tf.autograph.experimental.do_not_convert. Original error: could not get source code
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert

The verbosity has been set to 10.

This is the final report:

OSError: Unable to create file (unable to open file: name = 'D:/TESTS/building_detection/Data/Checkpoints\weights.01-0.93.hdf5', errno = 24, error message = 'Too many open files', flags = 13, o_flags = 302)

The same code has been used with other data, and I didn't get this message there.
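As the warning's Cause line suggests, functions defined in an interactive shell expose no source code for AutoGraph to inspect. A minimal sketch of the suggested fix is to move the metric into an importable .py file; the f1_m body below is a generic Keras-backend F1 approximation, a hypothetical stand-in for the author's actual metric:

# metrics.py -- importing from a real file on disk lets AutoGraph locate the source
from tensorflow.keras import backend as K

def f1_m(y_true, y_pred):
    # Generic batch-wise F1 approximation (hypothetical stand-in)
    tp = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
    precision = tp / (predicted_positives + K.epsilon())
    recall = tp / (possible_positives + K.epsilon())
    return 2 * precision * recall / (precision + recall + K.epsilon())

Then, in the training script, use from metrics import f1_m and pass metrics=[f1_m] to model.compile.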

@ddnovikov

ddnovikov commented Oct 27, 2021

Hi everyone!

I've had the same warnings and suddenly found out that they are caused by Jupyter's magic commands; it was the %%time command in my case. These commands also cause bugs where TF cannot retrieve the source code for some functions.

I didn't do very thorough research, but as far as I understand, it happens because magic commands wrap the cell's code in some object. So when TensorFlow tries to get the source code with the inspect module, it fails, or at least has difficulties, because the magic command encapsulates the whole cell's code, making the contents of the cell inaccessible to inspect.

Thus, avoid using Jupyter's magic commands along with TensorFlow; they can cause this warning. The rule of thumb here is to avoid any decorating structures over @tf.function-s.

@mdanatg

mdanatg commented Oct 27, 2021

That's a useful find. I suspect that if you put all your code in functions in a separate cell, and only call those functions from the %%time ones, it should work?

@ddnovikov

@mdanatg, I tried this and it doesn't seem to work, unfortunately. I finally did a little research and found that inspect.unwrap(my_func_wrapped_in_tf_function).__code__ returns <code object my_func_wrapped_in_tf_function at 0x7fd706258390, file "<timed exec>", line 36>. I found this "<timed exec>" in the ipython repo, but it's hard to tell why it hides the actual source code. I managed to find some kind of explanation here: ipython/ipython#11659 (comment)

We actually do not run things in global scope, but create an anonymous function and run this function. So in your example, Gloop is not really globlal.

I'm pretty sure starting your timeit with global Gloop should work around the issue. It's just quite hard.

It confirms my hypothesis but doesn't provide any hacks for our case. Trying to declare the function name as global (in the hope that it would make its name really global) didn't work for me either.

@mdanatg

mdanatg commented Nov 13, 2021

I see, that makes sense. This is in fact a limitation of Python itself: in order for inspect.getsource to work, one must load the function from a file on disk (basically, only imp.load_module wires source code up properly). It's even a bit of a mystery how Jupyter manages to wire that source info; I suspect it does some surgery on Python's internal caches.

Anyhow, we ought to get this to work if any function wrapped by AutoGraph is placed inside a Python file (and imported instead). In other words, no @tf.function call anywhere in the IPython code.

We're basically looking for the following call stack (conceptually):

ipython > timeit wrappers > module in a file > tf.function > autograph > rest of the code

Assuming timeit just calls external functions without doing anything else to them, things should work that way.
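Concretely, a minimal sketch of that layout (file and function names here are hypothetical):

# model_fns.py -- lives on disk, so inspect.getsource can find the code
import tensorflow as tf

@tf.function
def double(x):
    return x * 2

# In the notebook, the magic-decorated cell only calls the imported function:
# %%time
# from model_fns import double
# print(double(tf.constant(3)))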

@steveepreston

steveepreston commented Aug 19, 2024

Still facing this nerve-wracking warning in TensorFlow 2.16.1.
