
On all examples I get the error: "tfgan.gan_model doesn't work when executing eagerly" #29

Closed
elich11 opened this issue Jun 21, 2020 · 1 comment

Comments

elich11 commented Jun 21, 2020

Using the latest TensorFlow (2.2) on Ubuntu 20.04 LTS.

Here is the full log:

python3 progressive_gan/train_main.py --alsologtostderr
WARNING:tensorflow:From /home/elich11/.local/lib/python3.8/site-packages/tensorflow_gan/python/estimator/tpu_gan_estimator.py:42: The name tf.estimator.tpu.TPUEstimator is deprecated. Please use tf.compat.v1.estimator.tpu.TPUEstimator instead.

I0621 15:34:41.329408 139934777608000 train_main.py:156] dataset_file_pattern=
start_height=4
start_width=4
scale_base=2
num_resolutions=4
batch_size_schedule=[8, 8, 4]
kernel_size=3
colors=3
to_rgb_use_tanh_activation=False
stable_stage_num_images=1000
transition_stage_num_images=1000
total_num_images=10000
save_summaries_num_images=100
latent_vector_size=128
fmap_base=4096
fmap_decay=1.0
fmap_max=128
gradient_penalty_target=1.0
gradient_penalty_weight=10.0
real_score_penalty_weight=0.001
generator_learning_rate=0.001
discriminator_learning_rate=0.001
adam_beta1=0.0
adam_beta2=0.99
fake_grid_size=8
interp_grid_size=8
train_log_dir=/tmp/tfgan_logdir/progressive_gan/
master=
ps_replicas=0
task=0
2020-06-21 15:34:41.425124: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2020-06-21 15:34:41.434331: E tensorflow/stream_executor/cuda/cuda_driver.cc:313] failed call to cuInit: UNKNOWN ERROR (303)
2020-06-21 15:34:41.440051: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (DESKTOP-IB8RMFG): /proc/driver/nvidia/version does not exist
2020-06-21 15:34:41.443690: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-06-21 15:34:41.477946: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 2592000000 Hz
2020-06-21 15:34:41.485555: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f4444000b60 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-06-21 15:34:41.486501: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
I0621 15:34:41.505413 139934777608000 dataset_info.py:361] Load dataset info from /home/elich11/tensorflow_datasets/cifar10/3.0.2
I0621 15:34:41.520289 139934777608000 dataset_builder.py:282] Reusing dataset cifar10 (/home/elich11/tensorflow_datasets/cifar10/3.0.2)
I0621 15:34:41.521336 139934777608000 dataset_builder.py:477] Constructing tf.data.Dataset for split train, from /home/elich11/tensorflow_datasets/cifar10/3.0.2
2020-06-21 15:34:52.018620: W tensorflow/core/kernels/data/cache_dataset_ops.cc:794] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset will be discarded. This can happen if you have an input pipeline similar to dataset.cache().take(k).repeat(). You should use dataset.take(k).cache().repeat() instead.
2020-06-21 15:34:52.026975: W tensorflow/core/kernels/data/cache_dataset_ops.cc:794] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset will be discarded. This can happen if you have an input pipeline similar to dataset.cache().take(k).repeat(). You should use dataset.take(k).cache().repeat() instead.
Traceback (most recent call last):
File "progressive_gan/train_main.py", line 172, in
tf.app.run()
File "/home/elich11/.local/lib/python3.8/site-packages/tensorflow/python/platform/app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "/home/elich11/.local/lib/python3.8/site-packages/absl/app.py", line 299, in run
_run_main(main, args)
File "/home/elich11/.local/lib/python3.8/site-packages/absl/app.py", line 250, in _run_main
sys.exit(main(argv))
File "progressive_gan/train_main.py", line 166, in main
model = train.build_model(stage_id, batch_size, real_images, **config)
File "/home/elich11/.local/lib/python3.8/site-packages/tensorflow_gan/examples/progressive_gan/train.py", line 348, in build_model
gan_model = tfgan.gan_model(
File "/home/elich11/.local/lib/python3.8/site-packages/tensorflow_gan/python/train.py", line 102, in gan_model
raise ValueError('tfgan.gan_model doesn\'t work when executing eagerly.')
ValueError: tfgan.gan_model doesn't work when executing eagerly.
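
For context, TF 2.x executes eagerly by default, and the ValueError above says tfgan.gan_model only works in graph mode. A minimal workaround sketch (my assumption, not a fix confirmed by the maintainers) is to disable eager execution at the top of train_main.py, before any model is built:

import tensorflow as tf

# TF 2.x runs eagerly by default; tfgan.gan_model raises the ValueError above
# unless graph mode is active. Disabling eager execution first is an assumed
# workaround, not the project's documented fix.
tf.compat.v1.disable_eager_execution()
assert not tf.executing_eagerly()  # the graph-mode example code can now run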
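
As a side note, the cache_dataset_ops warnings earlier in the log spell out their own fix: call cache() after take() so only the elements actually read are cached. An illustrative sketch (the range dataset is a stand-in, not the example's actual CIFAR-10 pipeline):

import tensorflow as tf

k = 8  # hypothetical number of elements the iterator actually reads
ds = tf.data.Dataset.range(100)  # stand-in for the real input pipeline

# Pattern the warning flags: a partially read cache gets discarded.
# bad = ds.cache().take(k).repeat()

# Ordering the warning recommends: cache only what is read.
good = ds.take(k).cache().repeat()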

@Arpit2601

I'm also getting the same error. Any updates on this issue?

joel-shor added a commit that referenced this issue Jul 31, 2020
Fixes: #29
#31
PiperOrigin-RevId: 322138125
Change-Id: I8569e5d5907cd2e32189729d850cec8db671378c