
Google Cloud Shell: Turn images stored in buckets into tfrecords: Several errors, tf.placeholder() is not compatible with eager execution. #820

@dfvr1994

Description


Hi,

I'm following this tutorial
https://cloud.google.com/ai-platform/training/docs/algorithms/object-detection?authuser=2#overview

I'm trying to run this script from Google Cloud Shell to convert images stored in Cloud Storage buckets into TFRecords:

https://github.com/tensorflow/tpu/blob/master/tools/datasets/jpeg_to_tf_record.py

First, I had to run it with python3 and without the -m switch; otherwise it wouldn't run at all:

python3 jpeg_to_tf_record.py \
    --train_csv gs://test2_ai_onj_detector_sss/train_new.csv \
    --validation_csv gs://test2_ai_onj_detector_sss/validate_new.csv \
    --labels_file /home/my_username/labels.txt \
    --project_id $PROJECT_ID \
    --output_dir gs://$BUCKET_NAME/flowers_as_tf_record

It then shows the following error:

  File "jpeg_to_tf_record.py", line 156, in convert_to_example
TypeError: a bytes-like object is required, not 'str' [while running 'train_convert']

To get past this, I had to change line 156 from filename, label = csvline.encode('utf8', 'ignore').split(',') to filename, label = csvline.split(',').
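The root cause of that first error seems to be that in Python 3 csvline is already a str: .encode() turns it into bytes, and bytes.split() then rejects the str separator ','. A minimal reproduction of both behaviours (the record below is a hypothetical CSV line):

```python
# In Python 3, a CSV line read as text is already a str.
csvline = "gs://bucket/img.jpg,Lemon_healthy"  # hypothetical record

# The original line 156 encodes to bytes first, so splitting on the
# str separator ',' raises exactly the TypeError from the log.
try:
    csvline.encode('utf8', 'ignore').split(',')
except TypeError as err:
    print(err)  # a bytes-like object is required, not 'str'

# Splitting the str directly works as intended.
filename, label = csvline.split(',')
print(filename, label)
```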

Then the following error is shown:

    raise RuntimeError("tf.placeholder() is not compatible with "
RuntimeError: tf.placeholder() is not compatible with eager execution. [while running 'train_convert']

To work around that, I commented out import tensorflow.compat.v1 as tf and added this instead:

import tensorflow as tf
tf.compat.v1.disable_eager_execution()
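A lighter-touch alternative, assuming TensorFlow 2.x is installed, is to keep the script's original import and disable eager mode through the same compat.v1 module, so the rest of the file's v1-style calls stay untouched:

```python
# Keep the script's original compat.v1 import instead of replacing it,
# and switch off eager mode through the same module; tf.placeholder()
# then works without the RuntimeError.
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

x = tf.placeholder(tf.string, name="csvline")  # builds a graph node now
```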

After waiting another 5 minutes for the script to run, the following error is shown:

RecursionError: maximum recursion depth exceeded while calling a Python object

Am I doing things right? Isn't the script supposed to run without any major modifications?

Do I have to set an encoding option when saving the .csv files from Excel? Do I have to call sys.setrecursionlimit({A REALLY LARGE NUMBER})?

Here are the two CSV files: csv.zip

Here is the labels file: labels.txt

I'm also getting these warnings. Do I have to upgrade the Python SDK under Docker?

WARNING:root:Make sure that locally built Python SDK docker image has Python 3.7 interpreter.
WARNING:apache_beam.options.pipeline_options:Discarding invalid overrides: {'teardown_policy': 'TEARDOWN_ALWAYS'}
WARNING:apache_beam.options.pipeline_options:Discarding invalid overrides: {'teardown_policy': 'TEARDOWN_ALWAYS'}

Complete log of the last error:

/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.

/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
/usr/local/lib/python3.7/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/usr/local/lib/python3.7/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/usr/local/lib/python3.7/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/usr/local/lib/python3.7/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/usr/local/lib/python3.7/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/usr/local/lib/python3.7/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
Removing gs://test2_ai_onj_detector_sss/flowers_as_tf_record/tmp/preprocess-images-200807-181957.1596824408.349132/#1596824418714741...
Removing gs://test2_ai_onj_detector_sss/flowers_as_tf_record/tmp/staging/preprocess-images-200807-181957.1596824408.349132/apache_beam-2.23.0-cp37-cp37m-manylinux1_x86_64.whl#1596824410608308...
Removing gs://test2_ai_onj_detector_sss/flowers_as_tf_record/tmp/staging/preprocess-images-200807-181957.1596824408.349132/dataflow_python_sdk.tar#1596824409563714...
Removing gs://test2_ai_onj_detector_sss/flowers_as_tf_record/tmp/staging/preprocess-images-200807-181957.1596824408.349132/pickled_main_session#1596824409113998...
Removing gs://test2_ai_onj_detector_sss/flowers_as_tf_record/tmp/staging/preprocess-images-200807-181957.1596824408.349132/pipeline.pb#1596824409021480...
/ [5/5 objects] 100% Done
Operation completed over 5 objects.
WARNING:tensorflow:From jpeg_to_tf_record.py:230: FastGFile.__init__ (from tensorflow.python.platform.gfile) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.gfile.GFile.
Read in 2 labels, from Lemon_diseased to Lemon_healthy
WARNING:root:Make sure that locally built Python SDK docker image has Python 3.7 interpreter.
WARNING:apache_beam.options.pipeline_options:Discarding invalid overrides: {'teardown_policy': 'TEARDOWN_ALWAYS'}
WARNING:apache_beam.options.pipeline_options:Discarding invalid overrides: {'teardown_policy': 'TEARDOWN_ALWAYS'}
Traceback (most recent call last):
  File "jpeg_to_tf_record.py", line 259, in <module>
    os.path.join(OUTPUT_DIR, step)))
  File "/home/caropoveapt/.local/lib/python3.7/site-packages/apache_beam/pipeline.py", line 555, in __exit__
    self.run().wait_until_finish()
  File "/home/caropoveapt/.local/lib/python3.7/site-packages/apache_beam/runners/dataflow/dataflow_runner.py", line 1629, in wait_until_finish
    self)
apache_beam.runners.dataflow.dataflow_runner.DataflowRuntimeException: Dataflow pipeline failed. State: FAILED, Error:
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/util/module_wrapper.py", line 166, in __getattribute__
    return attr_map[name]
KeyError: '_tfmw_wrapped_module'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/dataflow_worker/batchworker.py", line 760, in run
    self._load_main_session(self.local_staging_directory)
  File "/usr/local/lib/python3.7/site-packages/dataflow_worker/batchworker.py", line 501, in _load_main_session
    pickler.load_session(session_file)
  File "/usr/local/lib/python3.7/site-packages/apache_beam/internal/pickler.py", line 311, in load_session
    return dill.load_session(file_path)
  File "/usr/local/lib/python3.7/site-packages/dill/_dill.py", line 368, in load_session
    module = unpickler.load()
  File "/usr/local/lib/python3.7/site-packages/dill/_dill.py", line 472, in load
    obj = StockUnpickler.load(self)
  File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/util/module_wrapper.py", line 192, in __getattr__
    attr = getattr(self._tfmw_wrapped_module, name)
  File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/util/module_wrapper.py", line 192, in __getattr__
    attr = getattr(self._tfmw_wrapped_module, name)
  File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/util/module_wrapper.py", line 192, in __getattr__
    attr = getattr(self._tfmw_wrapped_module, name)
  [Previous line repeated 490 more times]
  File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/util/module_wrapper.py", line 176, in __getattribute__
    attr = super(TFModuleWrapper, self).__getattribute__(name)
RecursionError: maximum recursion depth exceeded while calling a Python object
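For what it's worth, the final RecursionError is raised on the Dataflow worker while dill unpickles the main session, and TF 2.x's TFModuleWrapper is prone to recursing in __getattr__ there. One pattern used in Beam pipelines to sidestep this, sketched here with a hypothetical function name, is to import tensorflow inside the functions that need it rather than at module level, so the wrapper is never pickled with the main session:

```python
def convert_to_example(csvline):
    """Hypothetical stand-in for the script's per-record Beam step."""
    # Importing TF locally keeps the module object out of the pickled
    # main session that Dataflow ships to its workers, which is where
    # dill hits the TFModuleWrapper recursion in the traceback above.
    import tensorflow as tf  # resolved on the worker at call time

    filename, label = csvline.split(',')
    return filename, label
```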

I would really appreciate your help.
