Description
Training a Transformer model (models/official/transformer/transformer_main.py) on CPUs throws the following InvalidArgumentError at the beginning of training.
The training steps are documented at:
https://github.com/tensorflow/models/tree/master/official/transformer
The full error trace is as follows:
2018-07-25 16:16:27.734555: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
I0725 16:16:28.569362 139892177725248 tf_logging.py:115] Benchmark run: {'model_name': 'transformer', 'dataset': {'name': 'wmt_translate_ende'}, 'machine_config': {'cpu_info': {'num_cores': 112, 'cpu_info': 'Intel(R) Xeon(R) Platinum 8180 CPU @ 2.50GHz', 'mhz_per_cpu': 2500.0}, 'gpu_info': {'count': 0}, 'memory_total': 201245483008, 'memory_available': 179603804160}, 'test_id': None, 'run_date': '2018-07-25T23:16:27.746402Z', 'tensorflow_version': {'version': '1.9.0', 'git_hash': 'v1.9.0-0-g25c197e023'}, 'tensorflow_environment_variables': [], 'run_parameters': [{'name': 'allow_ffn_pad', 'bool_value': 'True'}, {'name': 'alpha', 'float_value': 0.6}, {'name': 'attention_dropout', 'float_value': 0.1}, {'name': 'batch_size', 'long_value': 32768}, {'name': 'beam_size', 'long_value': 4}, {'name': 'data_dir', 'string_value': '$HOME/transformer/data'}, {'name': 'default_batch_size', 'long_value': 2048}, {'name': 'default_batch_size_tpu', 'long_value': 32768}, {'name': 'extra_decode_length', 'long_value': 50}, {'name': 'filter_size', 'long_value': 2048}, {'name': 'hidden_size', 'long_value': 512}, {'name': 'initializer_gain', 'float_value': 1.0}, {'name': 'label_smoothing', 'float_value': 0.1}, {'name': 'layer_postprocess_dropout', 'float_value': 0.1}, {'name': 'learning_rate', 'float_value': 2.0}, {'name': 'learning_rate_decay_rate', 'float_value': 1.0}, {'name': 'learning_rate_warmup_steps', 'long_value': 16000}, {'name': 'max_length', 'long_value': 256}, {'name': 'model_dir', 'string_value': '$HOME/logs/transformer/model_base'}, {'name': 'num_heads', 'long_value': 8}, {'name': 'num_hidden_layers', 'long_value': 6}, {'name': 'num_parallel_calls', 'long_value': 112}, {'name': 'optimizer_adam_beta1', 'float_value': 0.9}, {'name': 'optimizer_adam_beta2', 'float_value': 0.997}, {'name': 'optimizer_adam_epsilon', 'float_value': 1e-09}, {'name': 'relu_dropout', 'float_value': 0.1}, {'name': 'repeat_dataset', 'long_value': 1}, {'name': 'static_batch', 'bool_value': 'False'}, 
{'name': 'tpu', 'string_value': 'None'}, {'name': 'use_synthetic_data', 'bool_value': 'False'}, {'name': 'use_tpu', 'bool_value': 'False'}, {'name': 'vocab_size', 'long_value': 33708}]}
I0725 16:16:32.669298 139892177725248 tf_logging.py:115] Using config: {'_model_dir': '$HOME/logs/transformer/model_base', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': None, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': <tensorflow.contrib.distribute.python.one_device_strategy.OneDeviceStrategy object at 0x7f397c1b69e8>, '_device_fn': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f397c1b6a58>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
I0725 16:16:32.672403 139892177725248 tf_logging.py:115] Training schedule:
I0725 16:16:32.672613 139892177725248 tf_logging.py:115] 1. Train for 1 epochs.
I0725 16:16:32.672733 139892177725248 tf_logging.py:115] 2. Evaluate model.
I0725 16:16:32.672834 139892177725248 tf_logging.py:115] 3. Compute BLEU score.
I0725 16:16:32.672941 139892177725248 tf_logging.py:115] Repeat above steps until the BLEU score reaches 25.000000
I0725 16:16:32.675400 139892177725248 tf_logging.py:115] Starting iteration 1
I0725 16:16:32.763708 139892177725248 tf_logging.py:115] Calling model_fn.
$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py:100: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
"Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
I0725 16:16:45.845091 139892177725248 tf_logging.py:115] Done calling model_fn.
I0725 16:16:46.302198 139892177725248 tf_logging.py:115] Create CheckpointSaverHook.
I0725 16:16:49.258668 139892177725248 tf_logging.py:115] Graph was finalized.
I0725 16:16:49.283038 139892177725248 tf_logging.py:115] Restoring parameters from $HOME/logs/transformer/model_base/model.ckpt-0
I0725 16:16:53.075307 139892177725248 tf_logging.py:115] Running local_init_op.
I0725 16:16:53.208708 139892177725248 tf_logging.py:115] Done running local_init_op.
I0725 16:17:01.174737 139892177725248 tf_logging.py:115] Saving checkpoints for 0 into $HOME/logs/transformer/model_base/model.ckpt.
Traceback (most recent call last):
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1322, in _do_call
return fn(*args)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1307, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1409, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[168,32] = 33748 is not in [0, 33708)
[[Node: model/Transformer/encode/embedding_shared_weights/embedding/Gather = ResourceGather[Tindices=DT_INT64, _class=["loc:@model...ad/Reshape"], dtype=DT_FLOAT, validate_indices=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](model/Transformer/embedding_shared_weights/embedding_and_softmax/weights, FunctionBufferingResourceGetNext)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "transformer_main.py", line 632, in &lt;module&gt;
absl_app.run(main)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/absl/app.py", line 274, in run
_run_main(main, args)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/absl/app.py", line 238, in _run_main
sys.exit(main(argv))
File "transformer_main.py", line 626, in main
run_transformer(flags.FLAGS)
File "transformer_main.py", line 608, in run_transformer
vocab_file=flags_obj.vocab_file)
File "transformer_main.py", line 332, in run_loop
hooks=train_hooks)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 366, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1117, in _train_model
return self._train_model_distributed(input_fn, hooks, saving_listeners)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1253, in _train_model_distributed
saving_listeners)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1336, in _train_with_estimator_spec
_, loss = mon_sess.run([estimator_spec.train_op, estimator_spec.loss])
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 577, in run
run_metadata=run_metadata)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1053, in run
run_metadata=run_metadata)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1144, in run
raise six.reraise(*original_exc_info)
File "$HOME/.local/lib/python3.6/site-packages/six.py", line 693, in reraise
raise value
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1129, in run
return self._sess.run(*args, **kwargs)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1201, in run
run_metadata=run_metadata)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 981, in run
return self._sess.run(*args, **kwargs)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 900, in run
run_metadata_ptr)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1135, in _run
feed_dict_tensor, options, run_metadata)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1316, in _do_run
run_metadata)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1335, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[168,32] = 33748 is not in [0, 33708)
[[Node: model/Transformer/encode/embedding_shared_weights/embedding/Gather = ResourceGather[Tindices=DT_INT64, _class=["loc:@model...ad/Reshape"], dtype=DT_FLOAT, validate_indices=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](model/Transformer/embedding_shared_weights/embedding_and_softmax/weights, FunctionBufferingResourceGetNext)]]
Caused by op 'model/Transformer/encode/embedding_shared_weights/embedding/Gather', defined at:
File "transformer_main.py", line 632, in &lt;module&gt;
absl_app.run(main)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/absl/app.py", line 274, in run
_run_main(main, args)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/absl/app.py", line 238, in _run_main
sys.exit(main(argv))
File "transformer_main.py", line 626, in main
run_transformer(flags.FLAGS)
File "transformer_main.py", line 608, in run_transformer
vocab_file=flags_obj.vocab_file)
File "transformer_main.py", line 332, in run_loop
hooks=train_hooks)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 366, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1117, in _train_model
return self._train_model_distributed(input_fn, hooks, saving_listeners)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1160, in _train_model_distributed
self.config)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/training/distribute.py", line 794, in call_for_each_tower
return self._call_for_each_tower(fn, *args, **kwargs)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/contrib/distribute/python/one_device_strategy.py", line 77, in _call_for_each_tower
return fn(*args, **kwargs)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1107, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "transformer_main.py", line 78, in model_fn
logits = model(inputs, targets)
File "$HOME/models/official/transformer/model/transformer.py", line 91, in call
encoder_outputs = self.encode(inputs, attention_bias)
File "$HOME/models/official/transformer/model/transformer.py", line 114, in encode
embedded_inputs = self.embedding_softmax_layer(inputs)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/layers/base.py", line 329, in call
outputs = super(Layer, self).call(inputs, *args, **kwargs)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 703, in call
outputs = self.call(inputs, *args, **kwargs)
File "$HOME/models/official/transformer/model/embedding_layer.py", line 76, in call
embeddings = tf.gather(self.shared_weights, x)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 2664, in gather
return params.sparse_read(indices, name=name)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/ops/resource_variable_ops.py", line 767, in sparse_read
self._handle, indices, dtype=self._dtype, name=name)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/ops/gen_resource_variable_ops.py", line 586, in resource_gather
validate_indices=validate_indices, name=name)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3414, in create_op
op_def=op_def)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1740, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
InvalidArgumentError (see above for traceback): indices[168,32] = 33748 is not in [0, 33708)
[[Node: model/Transformer/encode/embedding_shared_weights/embedding/Gather = ResourceGather[Tindices=DT_INT64, _class=["loc:@model...ad/Reshape"], dtype=DT_FLOAT, validate_indices=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](model/Transformer/embedding_shared_weights/embedding_and_softmax/weights, FunctionBufferingResourceGetNext)]]
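The message `indices[168,32] = 33748 is not in [0, 33708)` means the gather into the shared embedding table received a token id (33748) that is at least as large as the table's row count (`vocab_size` = 33708) — typically a sign that the data was tokenized with a different (larger) vocabulary file than the one the model was built with. A minimal sketch of a sanity check, using NumPy instead of TensorFlow and a hypothetical `validate_token_ids` helper (not part of the official model code), that reproduces the condition behind the error:

```python
import numpy as np

def validate_token_ids(batch, vocab_size):
    """Return the token ids in `batch` that would make tf.gather fail.

    tf.gather(embedding_table, ids) requires every id to lie in
    [0, vocab_size); anything outside that range raises
    InvalidArgumentError at run time, as seen in the trace above.
    """
    batch = np.asarray(batch)
    return batch[(batch < 0) | (batch >= vocab_size)]

# Mimic the failing run: the model's vocab_size is 33708,
# but the tokenized data contains id 33748.
bad_batch = np.array([[1, 5, 33748], [2, 3, 4]])
print(validate_token_ids(bad_batch, 33708))  # → [33748]

ok_batch = np.array([[1, 5, 33707], [2, 3, 4]])
print(validate_token_ids(ok_batch, 33708))   # → []
```

Running such a check over the TFRecords before training (or regenerating the data with the same vocab file the model uses) would confirm whether the dataset and the model's `vocab_size` parameter are out of sync.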