Cuda Error with low_mem #36
Hi,
I have come across the following error when using the toolbox in low memory mode.
On this computer, my GPU only has 2 GB, so I need to use this mode.
I have also tested this on another computer that has a GPU with 4 GB of RAM. The toolbox works perfectly in normal mode, but when I turn on low_mem, I run into the same error.
I'm not sure what other information you would need to look into this, so please let me know what else I can provide to help out.
Comments
What does demo_cli.py output?
Without the low_mem flag, it says that all tests pass. With the low_mem flag, I get the same CUDA_ERROR_NOT_INITIALIZED.
Hmm, I don't have a solution right now. I'll have to look into it.
Same problem. "demo_cli.py" output without the "--low_mem" flag: […] With the "--low_mem" flag: […]
I have the same error.
Can you try pulling from this new branch and see what gives?
With the new branch code and without the "--low_mem" flag, it gets stuck at "Interactive generation loop" after "all tests passed".
With the "--low_mem" flag, I get the same CUDA init error.
I still run into the same issue as well with the new branch.
When you get stuck at "Interactive generation loop", you should input your own audio file, and then the code will work; you can read the source code. But I don't know why the prompt message didn't show up.
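For what it's worth, the "Interactive generation loop" is simply the demo blocking on stdin, so if no prompt is visible the process looks hung. A minimal sketch of that kind of loop (the prompt text and final print are illustrative, not the toolbox's actual code):

```python
# Minimal sketch: an interactive loop that blocks on input(). If the prompt
# text never appears (or stdout is buffered), the process looks stuck even
# though it is simply waiting for an audio file path.
from pathlib import Path

while True:
    audio_fpath = input("Reference audio filepath (empty to quit): ")
    if not audio_fpath:
        break
    if not Path(audio_fpath).is_file():
        print(f"No such file: {audio_fpath}")
        continue
    print(f"Would synthesize a voice cloned from {audio_fpath}")
```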
I have the same error with a 2.1 GB GPU!
Unfortunately, I can't reproduce this. I'm running a win10 install on two computers (one with 2 GB VRAM) and tensorflow-gpu==1.13.1. I'm going to need more information to look into it.
I am running Linux on both computers: Ubuntu 18.04 with nvidia-430 and CUDA 10.2 (2 GB VRAM) on one, and Linux Mint 19.1 with nvidia-410 and CUDA 10.0 (4 GB VRAM) on the other. I have been playing around a bit with the code, and it seems that in […]
I've found out why and how to fix it: multiprocess uses forked workers by default, and a forked worker inherits state that CUDA doesn't expect. Switching to spawned workers fixes it. Patch:

```diff
diff --git a/synthesizer/inference.py b/synthesizer/inference.py
index 99fb778..b9cc9c0 100644
--- a/synthesizer/inference.py
+++ b/synthesizer/inference.py
@@ -2,12 +2,12 @@ from synthesizer.tacotron2 import Tacotron2
 from synthesizer.hparams import hparams
 from multiprocess.pool import Pool  # You're free to use either one
 #from multiprocessing import Pool  #
+from multiprocess.context import SpawnContext
 from synthesizer import audio
 from pathlib import Path
 from typing import Union, List
 import tensorflow as tf
 import numpy as np
-import numba.cuda
 import librosa
@@ -80,13 +80,15 @@ class Synthesizer:
         # Low memory inference mode: load the model upon every request. The model has to be
         # loaded in a separate process to be able to release GPU memory (a simple workaround
         # to tensorflow's intricacies)
-        specs, alignments = Pool(1).starmap(Synthesizer._one_shot_synthesize_spectrograms,
-                                            [(self.checkpoint_fpath, embeddings, texts)])[0]
+        specs, alignments = Pool(1, context=SpawnContext()
+                                 ).starmap(Synthesizer._one_shot_synthesize_spectrograms,
+                                           [(self.checkpoint_fpath, embeddings, texts)])[0]
         return (specs, alignments) if return_alignments else specs

     @staticmethod
     def _one_shot_synthesize_spectrograms(checkpoint_fpath, embeddings, texts):
+        import numba.cuda
         # Load the model and forward the inputs
         tf.reset_default_graph()
         model = Tacotron2(checkpoint_fpath, hparams)
```
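For anyone hitting this elsewhere: the same idea can be demonstrated with nothing but the standard library. Fork inherits the parent's (possibly already-initialized) CUDA state, while spawn starts the worker in a fresh interpreter. A minimal standalone sketch, with the GPU work stubbed out (gpu_job and its arguments are illustrative, not the toolbox's code):

```python
# Standalone sketch of the fix, using only the standard library: run the
# GPU-touching work in a *spawned* worker so it does not inherit CUDA state
# from the parent. gpu_job is a stub; in the toolbox it would load the
# Tacotron2 checkpoint and synthesize spectrograms.
import multiprocessing

def gpu_job(checkpoint_fpath, texts):
    # Imports that initialize CUDA (tensorflow, numba.cuda, ...) belong here,
    # inside the worker, so the parent process never touches the GPU.
    return [f"spectrogram for {text!r} from {checkpoint_fpath}" for text in texts]

if __name__ == "__main__":
    ctx = multiprocessing.get_context("spawn")  # fresh interpreter per worker
    with ctx.Pool(1) as pool:
        specs = pool.starmap(gpu_job, [("model.ckpt", ["hello", "world"])])[0]
    print(specs)
```

This is also why the patch moves `import numba.cuda` inside `_one_shot_synthesize_spectrograms`: the import then happens only in the worker process, and the parent never initializes CUDA at all.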
I've implemented it in a rush on the […]
I've got the same problem and can confirm that @lilydjwg's solution solved it.
* For low_mem, use spawned workers instead of forked workers (resolves #36). Used implementation from @lilydjwg: CorentinJ/Real-Time-Voice-Cloning#36 (comment)
* Different method of passing the seed for low_mem inference. Resolves #491, #529, #535
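A side effect of spawning is that workers no longer inherit the parent's interpreter state, so something like a random seed has to travel as an explicit argument rather than be set globally beforehand; presumably that is what the second bullet addresses. A hedged sketch of the pattern (names are illustrative, not the commit's actual code):

```python
# Sketch: a spawned worker starts with fresh module state, so any seed set in
# the parent is lost; pass it explicitly and re-seed inside the worker.
import multiprocessing
import random

def seeded_job(seed, n):
    random.seed(seed)  # must happen here; the parent's seeding is not inherited
    return [random.random() for _ in range(n)]

if __name__ == "__main__":
    ctx = multiprocessing.get_context("spawn")
    with ctx.Pool(1) as pool:
        print(pool.starmap(seeded_job, [(1234, 3)])[0])
```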