
AttributeError: module 'tensorflow.python.framework.op_def_registry' has no attribute 'register_op_list' #8972

Closed
sreenupadidapu opened this issue Jul 27, 2020 · 9 comments
Assignees
Labels
models:research (models that come under the research directory), stat:awaiting response (waiting on input from the contributor), type:support

Comments

@sreenupadidapu

While running train.py with TensorFlow 2.2, I am getting this error.

@sreenupadidapu
Author

Versions I used:
python: 3.6
tensorflow: 2.2
numpy: 1.19
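
For reference, the exact versions can be confirmed from the same environment that runs train.py (a minimal sketch; the expected values are simply the ones listed above):

import sys

import numpy as np
import tensorflow as tf

print(sys.version.split()[0])  # expected: 3.6.x
print(tf.__version__)          # expected: 2.2.x
print(np.__version__)          # expected: 1.19.x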

@ravikyram

@sreenupadidapu

What is the top-level directory of the model you are using?

Please let us know which pretrained model you are using and share the related code. Thanks!

@ravikyram ravikyram self-assigned this Jul 27, 2020
@ravikyram ravikyram added the stat:awaiting response and type:support labels Jul 27, 2020
@sreenupadidapu
Author

I am using the faster_rcnn_resnet152_v1_800x1333_coco17_gpu-8 pretrained model.

@sreenupadidapu
Author

# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================

r"""Training executable for detection models.

This executable is used to train DetectionModels. There are two ways of
configuring the training job:

1) A single pipeline_pb2.TrainEvalPipelineConfig configuration file
can be specified by --pipeline_config_path.

Example usage:
    ./train \
        --logtostderr \
        --train_dir=path/to/train_dir \
        --pipeline_config_path=pipeline_config.pbtxt

2) Three configuration files can be provided: a model_pb2.DetectionModel
configuration file to define what type of DetectionModel is being trained, an
input_reader_pb2.InputReader file to specify what training data will be used and
a train_pb2.TrainConfig file to configure training parameters.

Example usage:
    ./train \
        --logtostderr \
        --train_dir=path/to/train_dir \
        --model_config_path=model_config.pbtxt \
        --train_config_path=train_config.pbtxt \
        --input_config_path=train_input_config.pbtxt
"""

import functools
import json
import os
import tensorflow.compat.v1 as tf
from tensorflow.python.util.deprecation import deprecated

from object_detection.builders import dataset_builder
from object_detection.builders import graph_rewriter_builder
from object_detection.builders import model_builder
from object_detection.legacy import trainer
from object_detection.utils import config_util

tf.logging.set_verbosity(tf.logging.INFO)

flags = tf.app.flags
flags.DEFINE_string('master', '', 'Name of the TensorFlow master to use.')
flags.DEFINE_integer('task', 0, 'task id')
flags.DEFINE_integer('num_clones', 1, 'Number of clones to deploy per worker.')
flags.DEFINE_boolean('clone_on_cpu', False,
                     'Force clones to be deployed on CPU. Note that even if '
                     'set to False (allowing ops to run on gpu), some ops may '
                     'still be run on the CPU if they have no GPU kernel.')
flags.DEFINE_integer('worker_replicas', 1, 'Number of worker+trainer '
                     'replicas.')
flags.DEFINE_integer('ps_tasks', 0,
                     'Number of parameter server tasks. If None, does not use '
                     'a parameter server.')
flags.DEFINE_string('train_dir', '',
                    'Directory to save the checkpoints and training summaries.')

flags.DEFINE_string('pipeline_config_path', '',
                    'Path to a pipeline_pb2.TrainEvalPipelineConfig config '
                    'file. If provided, other configs are ignored')

flags.DEFINE_string('train_config_path', '',
                    'Path to a train_pb2.TrainConfig config file.')
flags.DEFINE_string('input_config_path', '',
                    'Path to an input_reader_pb2.InputReader config file.')
flags.DEFINE_string('model_config_path', '',
                    'Path to a model_pb2.DetectionModel config file.')

FLAGS = flags.FLAGS


@deprecated(None, 'Use object_detection/model_main.py.')
def main(_):
  assert FLAGS.train_dir, 'train_dir is missing.'
  if FLAGS.task == 0: tf.gfile.MakeDirs(FLAGS.train_dir)
  if FLAGS.pipeline_config_path:
    configs = config_util.get_configs_from_pipeline_file(
        FLAGS.pipeline_config_path)
    if FLAGS.task == 0:
      tf.gfile.Copy(FLAGS.pipeline_config_path,
                    os.path.join(FLAGS.train_dir, 'pipeline.config'),
                    overwrite=True)
  else:
    configs = config_util.get_configs_from_multiple_files(
        model_config_path=FLAGS.model_config_path,
        train_config_path=FLAGS.train_config_path,
        train_input_config_path=FLAGS.input_config_path)
    if FLAGS.task == 0:
      for name, config in [('model.config', FLAGS.model_config_path),
                           ('train.config', FLAGS.train_config_path),
                           ('input.config', FLAGS.input_config_path)]:
        tf.gfile.Copy(config, os.path.join(FLAGS.train_dir, name),
                      overwrite=True)

  model_config = configs['model']
  train_config = configs['train_config']
  input_config = configs['train_input_config']

  model_fn = functools.partial(
      model_builder.build,
      model_config=model_config,
      is_training=True)

  def get_next(config):
    return dataset_builder.make_initializable_iterator(
        dataset_builder.build(config)).get_next()

  create_input_dict_fn = functools.partial(get_next, input_config)

  env = json.loads(os.environ.get('TF_CONFIG', '{}'))
  cluster_data = env.get('cluster', None)
  cluster = tf.train.ClusterSpec(cluster_data) if cluster_data else None
  task_data = env.get('task', None) or {'type': 'master', 'index': 0}
  task_info = type('TaskSpec', (object,), task_data)

  # Parameters for a single worker.
  ps_tasks = 0
  worker_replicas = 1
  worker_job_name = 'lonely_worker'
  task = 0
  is_chief = True
  master = ''

  if cluster_data and 'worker' in cluster_data:
    # Number of total worker replicas include "worker"s and the "master".
    worker_replicas = len(cluster_data['worker']) + 1
  if cluster_data and 'ps' in cluster_data:
    ps_tasks = len(cluster_data['ps'])

  if worker_replicas > 1 and ps_tasks < 1:
    raise ValueError('At least 1 ps task is needed for distributed training.')

  if worker_replicas >= 1 and ps_tasks > 0:
    # Set up distributed training.
    server = tf.train.Server(tf.train.ClusterSpec(cluster), protocol='grpc',
                             job_name=task_info.type,
                             task_index=task_info.index)
    if task_info.type == 'ps':
      server.join()
      return

    worker_job_name = '%s/task:%d' % (task_info.type, task_info.index)
    task = task_info.index
    is_chief = (task_info.type == 'master')
    master = server.target

  graph_rewriter_fn = None
  if 'graph_rewriter_config' in configs:
    graph_rewriter_fn = graph_rewriter_builder.build(
        configs['graph_rewriter_config'], is_training=True)

  trainer.train(
      create_input_dict_fn,
      model_fn,
      train_config,
      master,
      task,
      FLAGS.num_clones,
      worker_replicas,
      FLAGS.clone_on_cpu,
      ps_tasks,
      worker_job_name,
      is_chief,
      FLAGS.train_dir,
      graph_hook_fn=graph_rewriter_fn)


if __name__ == '__main__':
  tf.app.run()

I am getting the error while running this model.

@sreenupadidapu
Author

python3 train.py --logtostderr --train_dir=checkpoints/ --pipeline_config_path=checkpoints/faster_rcnn_resnet152_v1_800x1333_coco17_gpu-8.config
Traceback (most recent call last):
File "train.py", line 51, in
from object_detection.builders import dataset_builder
File "/home/spadidapu/models-mast/models-master/research/object_detection/builders/dataset_builder.py", line 32, in
from object_detection.builders import decoder_builder
File "/home/spadidapu/models-mast/models-master/research/object_detection/builders/decoder_builder.py", line 25, in
from object_detection.data_decoders import tf_example_decoder
File "/home/spadidapu/models-mast/models-master/research/object_detection/data_decoders/tf_example_decoder.py", line 37, in
from tensorflow.contrib import lookup as contrib_lookup
File "/home/spadidapu/.local/lib/python3.6/site-packages/tensorflow/contrib/init.py", line 31, in
from tensorflow.contrib import cloud
File "/home/spadidapu/.local/lib/python3.6/site-packages/tensorflow/contrib/cloud/init.py", line 24, in
from tensorflow.contrib.cloud.python.ops.bigquery_reader_ops import *
File "/home/spadidapu/.local/lib/python3.6/site-packages/tensorflow/contrib/cloud/python/ops/bigquery_reader_ops.py", line 21, in
from tensorflow.contrib.cloud.python.ops import gen_bigquery_reader_ops
File "/home/spadidapu/.local/lib/python3.6/site-packages/tensorflow/contrib/cloud/python/ops/gen_bigquery_reader_ops.py", line 369, in
_op_def_lib = _InitOpDefLibrary(b"\n\355\001\n\016BigQueryReader\032\024\n\rreader_handle\030\007\200\001\001"\027\n\tcontainer\022\006string\032\002\022\000"\031\n\013shared_name\022\006string\032\002\022\000"\024\n\nproject_id\022\006string"\024\n\ndataset_id\022\006string"\022\n\010table_id\022\006string"\027\n\007columns\022\014list(string)"\027\n\020timestamp_millis\022\003int"\034\n\016test_end_point\022\006string\032\002\022\000\210\001\001\n\331\001\n GenerateBigQueryReaderPartitions\032\016\n\npartitions\030\007"\024\n\nproject_id\022\006string"\024\n\ndataset_id\022\006string"\022\n\010table_id\022\006string"\027\n\007columns\022\014list(string)"\027\n\020timestamp_millis\022\003int"\025\n\016num_partitions\022\003int"\034\n\016test_end_point\022\006string\032\002\022\000")
File "/home/spadidapu/.local/lib/python3.6/site-packages/tensorflow/contrib/cloud/python/ops/gen_bigquery_reader_ops.py", line 277, in _InitOpDefLibrary
_op_def_registry.register_op_list(op_list)
AttributeError: module 'tensorflow.python.framework.op_def_registry' has no attribute 'register_op_list'
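
For reference, TensorFlow 2.x does not ship a tensorflow.contrib package, so the contrib files that appear in the traceback above most likely remain from an earlier TF 1.x install in the same site-packages. A quick way to check (a sketch, not part of train.py or the Object Detection API):

import os
import tensorflow as tf

# Directory of the installed tensorflow package, e.g.
# /home/spadidapu/.local/lib/python3.6/site-packages/tensorflow
tf_dir = os.path.dirname(tf.__file__)

print(tf.__version__)  # 2.2.x in this report
# A clean TF 2.x install has no contrib/ directory; True here would suggest
# leftover TF 1.x files sitting next to the TF 2.2 install.
print(os.path.isdir(os.path.join(tf_dir, 'contrib')))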

@sreenupadidapu
Author

Please help, thanks in advance.

@ravikyram ravikyram added the models:research label and removed the stat:awaiting response label Jul 27, 2020
@ravikyram ravikyram assigned tombstone, jch1 and pkulzc and unassigned ravikyram Jul 27, 2020
@syiming
Contributor

syiming commented Jul 28, 2020

Hi, the error is reported from a TensorFlow library rather than from within the Object Detection API. Please check this issue: tensorflow/tensorflow#34762.
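
The attribute named in the error was part of op_def_registry in TF 1.x and is gone in TF 2.x, which can be checked directly (a sketch, assuming the TF 2.2 install reported above):

import tensorflow as tf
from tensorflow.python.framework import op_def_registry

print(tf.__version__)
# TF 1.x exposes register_op_list; TF 2.x does not, so the old contrib code in
# the traceback fails with the AttributeError above when it calls it.
print(hasattr(op_def_registry, 'register_op_list'))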

@syiming syiming added the stat:awaiting response label Jul 28, 2020
@syiming
Contributor

syiming commented Aug 30, 2020

Closing this issue due to lack of activity. Please reopen this issue if you have further questions. Thanks!

@syiming syiming closed this as completed Aug 30, 2020
@marinagardella

HELP!
I NEED SOMEBODY
HELP!
