
Module 'tensorflow' has no attribute 'placeholder' #403

Open
UnforeseenOcean opened this issue May 22, 2021 · 8 comments
@UnforeseenOcean

I'm using Anaconda3 and the latest version of this repository. I have manually installed librosa and TensorFlow (following the Anaconda tutorial).
Environment: Anaconda Prompt (Windows 10), with TensorFlow installed into an env named "tf" via conda create -n tf tensorflow

See the attached screenshot for the error. The same happens when using tensorflow-gpu.

I'm sure I did something wrong but I don't know what it is.

UnforeseenOcean commented May 22, 2021

Edit: It seems this code does not support TensorFlow 2. I will attempt the workaround I found and report back.

Note: Reinstalling a lower TensorFlow version into the same env does not work because the env's Python version is too new; TensorFlow 1.x requires Python 3.7 or lower.

Update: Patching the script with the following works to some extent, but model.py then fails with AttributeError: module 'tensorflow.compat.v1' has no attribute 'contrib' (tf.contrib was removed in TensorFlow 2 and is not restored by the compat shim):

# TF2 compatibility shim: run TF1-style graph code on a TF2 install
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

Update:

Make a new env first on Anaconda (installing into the base env WILL break your install) using:

conda create -n tf python=3.7 pip
conda activate tf

then run:

pip install "tensorflow-gpu<2.0"
pip install "librosa<0.7.0"
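After activating the env, a quick sanity check can confirm the installed TensorFlow is on the 1.x line. This is a minimal sketch; the helper name and version strings are illustrative, not part of the repo:

```python
def is_tf1(version):
    """True if a TensorFlow version string belongs to the 1.x line."""
    return int(version.split(".")[0]) < 2

# pip install "tensorflow-gpu<2.0" should resolve to something like 1.15.x
print(is_tf1("1.15.5"))  # True
print(is_tf1("2.4.1"))   # False
```

In practice you would pass tf.__version__ to the check.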

@UnforeseenOcean

Now it crashes because the cuDNN handle could not be created. Is there a way to limit memory usage? (I have 6 GB of VRAM on an NVIDIA GTX 1660; it hits maximum memory use immediately.)

UnforeseenOcean commented May 22, 2021

Forcing the CUDA instance to use less memory by patching the train.py script helped a little, but it still crashed with a flood of memory-allocation errors despite more than 3 GB of free memory being available.

# Set up session, capping TF's GPU allocation at half the available VRAM
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.5)
sess = tf.Session(config=tf.ConfigProto(log_device_placement=False, gpu_options=gpu_options))
init = tf.global_variables_initializer()
sess.run(init)

@UnforeseenOcean

This keeps happening (see the attached screenshot). What am I doing wrong? Is it a CUDA version mismatch?

ljuvela commented May 22, 2021

Your sequence length seems fairly long (87,040 samples); that could cause the OOM. Try chunking the audio into roughly one-second segments.
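A minimal NumPy sketch of that chunking (the function name and the fixed sample rate are assumptions based on this thread, not code from the repo):

```python
import numpy as np

def chunk_audio(audio, sr, seconds=1.0):
    """Split a 1-D audio array into fixed-length segments, dropping the tail remainder."""
    seg = int(sr * seconds)
    n = len(audio) // seg
    return audio[: n * seg].reshape(n, seg)

sr = 24000
audio = np.zeros(87040, dtype=np.float32)  # the sequence length from the OOM report
chunks = chunk_audio(audio, sr)
print(chunks.shape)  # (3, 24000)
```

Each row is then short enough to feed as its own training example.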

The type errors are likely related to TF1/TF2 discrepancies.

UnforeseenOcean commented May 22, 2021

I tried only the shortest clips, and it seems to work. What could be the maximum length per voice sample?

Currently using clips of 5 seconds or less at 24000 Hz (592 clips in total).
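For scale, an upper bound on the dataset size implied by those numbers (pure arithmetic, assuming every clip is the full 5 seconds):

```python
clips, max_seconds, sr = 592, 5, 24000
max_samples = clips * max_seconds * sr  # upper bound on total samples
print(max_samples)                      # 71040000
print(max_samples / sr / 60)            # ~49.3 minutes of audio at most
```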

ljuvela commented May 22, 2021

I’d say the simplest way to find out is just to experiment. No need to use actual data, you could just generate random tensors of specific lengths.
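A sketch of such an experiment with NumPy (lengths and sample rate are assumptions based on this thread; each array would be fed into one training step in place of real audio):

```python
import numpy as np

sr = 24000  # sample rate used earlier in the thread
# random stand-ins for audio clips of increasing duration
lengths = [np.random.randn(sr * s).astype(np.float32).shape[0] for s in (1, 2, 5)]
print(lengths)  # [24000, 48000, 120000]
```

Increase the durations until training OOMs, and you have your practical maximum clip length.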

Calculating memory consumption is not straightforward, since it also depends on model size (activations and gradients must be stored, which scales with both sequence length and network depth).
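A rough back-of-the-envelope helper for that scaling (every number here is illustrative, not a property of this model):

```python
def activation_mb(seq_len, layers, channels, bytes_per_value=4):
    """Crude lower bound on activation memory for the backward pass:
    one float stored per timestep, per channel, per layer."""
    return seq_len * layers * channels * bytes_per_value / 2**20

# e.g. the 87040-sample clip from above, through a hypothetical 30-layer, 64-channel net
print(activation_mb(87040, 30, 64))  # 637.5 (MB), before gradients roughly double it
```

The estimate grows linearly in both sequence length and depth, which is why chunking the input is the easiest lever.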

SerafimC commented Sep 18, 2021

Just to share my experience: I am using a dataset with one sentence per voice sample, so the audio clips are around 3 to 5 seconds each.
