
After upgrading to the main branch, the output is not as good as what v0.94 generated #180

Open
mybolide opened this issue Jul 11, 2024 · 5 comments

Comments

@mybolide

After upgrading to the main branch, I found the output is not as good as what v0.94 generated.
Adding a Prompt in the new version has no effect. I packaged the samples into a zip; could you take a look?
Music.zip

@mybolide
Author

The older version seems closer to a real human voice.

@jianchang512
Owner

'refine text' is selected by default; try unchecking it.

@mybolide
Author

```
    for result in new_gen:
  File "E:\chatTTS-ui_source\venv\Lib\site-packages\torch\utils\_contextlib.py", line 35, in generator_context
    response = gen.send(None)
               ^^^^^^^^^^^^^^
  File "E:\chatTTS-ui_source\ChatTTS\model\gpt.py", line 570, in generate
    for result in new_gen:
  File "E:\chatTTS-ui_source\venv\Lib\site-packages\torch\utils\_contextlib.py", line 35, in generator_context
    response = gen.send(None)
               ^^^^^^^^^^^^^^
  File "E:\chatTTS-ui_source\ChatTTS\model\gpt.py", line 438, in generate
    outputs: BaseModelOutputWithPast = self.gpt(
                                       ^^^^^^^^^
  File "E:\chatTTS-ui_source\venv\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\chatTTS-ui_source\venv\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\chatTTS-ui_source\venv\Lib\site-packages\transformers\models\llama\modeling_llama.py", line 978, in forward
    layer_outputs = decoder_layer(
                    ^^^^^^^^^^^^^^
  File "E:\chatTTS-ui_source\venv\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\chatTTS-ui_source\venv\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\chatTTS-ui_source\venv\Lib\site-packages\transformers\models\llama\modeling_llama.py", line 732, in forward
    hidden_states = self.mlp(hidden_states)
                    ^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\chatTTS-ui_source\venv\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\chatTTS-ui_source\venv\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\chatTTS-ui_source\venv\Lib\site-packages\transformers\models\llama\modeling_llama.py", line 215, in forward
    down_proj = self.down_proj(self.act_fn(self.gate_proj(x)) * self.up_proj(x))
                ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 12.00 MiB. GPU 0 has a total capacity of 24.00 GiB of which 0 bytes is free. Of the allocated memory 34.59 GiB is allocated by PyTorch, and 3.69 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
```
After re-pulling the code, I now get a CUDA out-of-memory error, even though the GPU has 24 GB.
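One oddity in the log above: the allocator reports more memory in use than the card physically holds, which (just a guess) would be consistent with the driver spilling into shared system memory on Windows. A quick sanity check on the figures copied from the error message:

```python
# Figures taken verbatim from the CUDA OOM message above
total_capacity_gib = 24.00   # "GPU 0 has a total capacity of 24.00 GiB"
allocated_gib = 34.59        # "34.59 GiB is allocated by PyTorch"
reserved_unused_gib = 3.69   # "reserved by PyTorch but unallocated"

# How far past the card's physical VRAM the allocator has gone
overcommit_gib = allocated_gib + reserved_unused_gib - total_capacity_gib
print(f"PyTorch accounts for {overcommit_gib:.2f} GiB beyond the card's capacity")
```

The allocated figure alone already exceeds the 24 GiB card, so something beyond ordinary fragmentation is going on; checking which other processes hold VRAM (e.g. via Task Manager or `nvidia-smi`) would narrow it down.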

@jianchang512
Owner

What matters is the card's dedicated VRAM capacity.

Also, another program may be using the GPU, leaving less memory actually available.

Try again.
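Before retrying, the OOM message itself suggests enabling expandable segments in the CUDA caching allocator to reduce fragmentation. A minimal sketch (the `app.py` entry point is an assumption; use whatever command normally starts chatTTS-ui, and on Windows cmd use `set` instead of `export`):

```shell
# Let the CUDA caching allocator grow in expandable segments,
# as suggested by the OOM message, then relaunch the app.
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
echo "$PYTORCH_CUDA_ALLOC_CONF"
# then start the UI as usual, e.g.:
#   python app.py
```

This only mitigates fragmentation; it will not help if another process is genuinely holding most of the VRAM.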

@mybolide
Author

After switching to the v0.95 tag, it runs without any problem. Strange.
