tf.keras.estimator.estimator_from_model does not respect options set in RunConfig #14776
Comments
@yifeif could you please take a look?
@ispirmustafa any idea? The run_config is passed in directly when creating the keras version of the Estimator. Do we need to pass these configurations anywhere else?
It should not be related to the Keras code. It should work, since it is handled within Estimator.
Sure thing, here is the output:

*********** est_keras.config *************************************
<tensorflow.python.estimator.run_config.RunConfig object at 0x7f7694423fd0>
*********** est_keras.config.cluster_spec.as_dict() *************
{}
******************************************************************

Code to reproduce (same as the source in the original issue above, but with print statements after the creation of the keras estimator) and full log output are available here: https://gist.github.com/droidicus/146532eacf88ed57538bb41a8fc7da4b
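For reference, a sketch of what those print statements likely looked like (assuming the est_keras estimator from the reproduction script in the "Source code / logs" section below; the verbatim version is in the linked gist):

# Hypothetical print statements added after creating the keras estimator;
# the exact code is in the gist linked above.
print('*********** est_keras.config *************************************')
print(est_keras.config)
print('*********** est_keras.config.cluster_spec.as_dict() *************')
print(est_keras.config.cluster_spec.as_dict())
print('******************************************************************')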
It has been 14 days with no activity and this issue has an assignee. Please update the label and/or status accordingly.
Gentle ping, this is still an issue for me.
Hi @shivaniag, FYI, I've checked keras.model_to_estimator. It's sending the config properly to tf.estimator.Estimator.
Just as an FYI: while this is still a problem in TF v1.5rc0, we were able to use the following as a workaround for now; by setting the default session manually, the memory fraction is respected:

import os
import numpy as np
import tensorflow as tf

tf.logging.set_verbosity(tf.logging.INFO)

gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.5)
sess_config = tf.ConfigProto(gpu_options=gpu_options)

# Manually set the default session instead
tf.Session(config=sess_config).as_default()
# run_config = tf.estimator.RunConfig(session_config=sess_config)

inputs = tf.keras.layers.Input(shape=(10,))
outputs = tf.keras.layers.Dense(10)(inputs)
model = tf.keras.models.Model(inputs, outputs)
model.compile(optimizer='sgd', loss='mse')

est_keras = tf.keras.estimator.model_to_estimator(keras_model=model)  # , config=run_config)

input_name = model.input_names[0]
data = np.random.rand(1000, 10).astype(np.float32)
train_input_fn = tf.estimator.inputs.numpy_input_fn(
    {input_name: data}, data, batch_size=10, num_epochs=None, shuffle=False)
train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn, max_steps=100000)
eval_spec = tf.estimator.EvalSpec(input_fn=train_input_fn, steps=10)
tf.estimator.train_and_evaluate(est_keras, train_spec, eval_spec)
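A note on this workaround (my reading; the thread does not explain why it works): calling as_default() without a with block does not actually install a default session, but constructing the capped session at all presumably initializes the process-wide GPU allocator with the reduced fraction, which later sessions then reuse. A more explicit variant, assuming TF 1.x's tf.keras.backend.set_session, would be to register the configured session with the Keras backend directly:

import tensorflow as tf

gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.5)
sess_config = tf.ConfigProto(gpu_options=gpu_options)

# Register the configured session with the Keras backend explicitly,
# rather than relying on side effects of constructing a session.
tf.keras.backend.set_session(tf.Session(config=sess_config))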
A member of the TensorFlow organization has replied after the stat:awaiting tensorflower label was applied. |
Nagging Assignee: It has been 14 days with no activity and this issue has an assignee. Please update the label and/or status accordingly. |
Sorry for the delay. This is how model_to_estimator creates its model_fn. @ispirmustafa, is there anything you see with the session that should be done differently? Thanks!
Nagging Assignees @yifeif, @shivaniag, @ispirmustafa: It has been 14 days with no activity and this issue has an assignee. Please update the label and/or status accordingly. |
A fix has been submitted internally and should make it to master tomorrow. Thanks!
Fantastic, thanks! |
System information
Describe the problem
When trying to use an estimator that is derived from tf.keras.estimator.estimator_from_model() and training with tf.estimator.train_and_evaluate(), setting gpu_options in the session_config of tf.estimator.RunConfig does not cause the settings to be respected when passed to the estimator_from_model function. For example, setting per_process_gpu_memory_fraction=0.5 does not decrease the memory allocated to the process on the GPU; similarly, setting allow_growth=True continues to allocate all of the memory and does not allow memory growth.

I also tested this with the canned estimator tf.estimator.DNNRegressor, and the settings were applied as expected when the RunConfig was passed to the estimator. Below is code to demonstrate this issue.
Source code / logs
Minimal example; it runs to completion and trains successfully, but changing the GPUOptions settings does not cause the GPU memory to be utilized as expected:
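The example itself is missing here; below is a reconstruction based on the workaround snippet above, which the thread says mirrors the original source, with the RunConfig passed to model_to_estimator (treat it as a close approximation rather than the verbatim original):

import numpy as np
import tensorflow as tf

tf.logging.set_verbosity(tf.logging.INFO)

# Cap GPU memory via session_config; with model_to_estimator this cap
# is silently ignored, which is the bug being reported.
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.5)
sess_config = tf.ConfigProto(gpu_options=gpu_options)
run_config = tf.estimator.RunConfig(session_config=sess_config)

inputs = tf.keras.layers.Input(shape=(10,))
outputs = tf.keras.layers.Dense(10)(inputs)
model = tf.keras.models.Model(inputs, outputs)
model.compile(optimizer='sgd', loss='mse')

est_keras = tf.keras.estimator.model_to_estimator(keras_model=model, config=run_config)

input_name = model.input_names[0]
data = np.random.rand(1000, 10).astype(np.float32)
train_input_fn = tf.estimator.inputs.numpy_input_fn(
    {input_name: data}, data, batch_size=10, num_epochs=None, shuffle=False)
train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn, max_steps=100000)
eval_spec = tf.estimator.EvalSpec(input_fn=train_input_fn, steps=10)
tf.estimator.train_and_evaluate(est_keras, train_spec, eval_spec)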
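For contrast, a sketch of the canned-estimator comparison described in the problem section above (my reconstruction, reusing run_config and data from the minimal example; the feature-column setup and names are assumed, not quoted from the report):

# Reconstructed comparison (column name 'x' and shapes are illustrative):
# a canned estimator given the same RunConfig honors the GPU memory fraction.
feature_columns = [tf.feature_column.numeric_column('x', shape=(10,))]
est_canned = tf.estimator.DNNRegressor(hidden_units=[10],
                                       feature_columns=feature_columns,
                                       label_dimension=10,
                                       config=run_config)
canned_input_fn = tf.estimator.inputs.numpy_input_fn(
    {'x': data}, data, batch_size=10, num_epochs=None, shuffle=False)
est_canned.train(input_fn=canned_input_fn, steps=100)
# While this trains, nvidia-smi shows roughly half of GPU memory allocated,
# matching per_process_gpu_memory_fraction=0.5.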