
Support AMD GPUs on Windows #120

Open
34j opened this issue Mar 26, 2023 · 21 comments
Labels: enhancement (New feature or request)

34j (Collaborator) commented Mar 26, 2023

Is your feature request related to a problem? Please describe.
AMD GPUs are not supported on Windows.

Describe the solution you'd like
Add support for AMD GPUs on Windows.

Additional context

@pierluigizagaria

I'm trying to get this version working. I've installed the CPU build of torch because there is no ROCm build for Windows.
After installing with pip install -U git+https://github.com/34j/so-vits-svc-fork.git@feat/openml, running svc train produces this error:

Traceback (most recent call last):
  File "C:\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "D:\Files\Code\python\so-vits-svc-fork\venv\Scripts\svc.exe\__main__.py", line 7, in <module>
  File "D:\Files\Code\python\so-vits-svc-fork\venv\lib\site-packages\click\core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "D:\Files\Code\python\so-vits-svc-fork\venv\lib\site-packages\click\core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "D:\Files\Code\python\so-vits-svc-fork\venv\lib\site-packages\click\core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "D:\Files\Code\python\so-vits-svc-fork\venv\lib\site-packages\click\core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "D:\Files\Code\python\so-vits-svc-fork\venv\lib\site-packages\click\core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "D:\Files\Code\python\so-vits-svc-fork\venv\lib\site-packages\so_vits_svc_fork\__main__.py", line 130, in train
    train(config_path=config_path, model_path=model_path)
  File "D:\Files\Code\python\so-vits-svc-fork\venv\lib\site-packages\so_vits_svc_fork\train.py", line 41, in train
    raise RuntimeError("CUDA is not available.")
RuntimeError: CUDA is not available.

After commenting out these two lines in train.py:

    #if not torch.cuda.is_available():
        #raise RuntimeError("CUDA is not available.")

This is the output of the command. Training does not start at all.

(venv) PS D:\Files\Code\python\so-vits-svc-fork> svc train
[17:19:29] INFO     [17:19:29] Version: 1.3.3                                                                                                                                                   __main__.py:49
Downloading D_0.pth: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 178M/178M [00:04<00:00, 41.2MiB/s]
Downloading G_0.pth: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 172M/172M [00:03<00:00, 47.8MiB/s]
(venv) PS D:\Files\Code\python\so-vits-svc-fork> 
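A quick way to see why training exits silently is to check which backends this PyTorch install can actually use. This is a minimal diagnostic sketch, not project code; it assumes nothing beyond an optional torch install, and torch_directml is only probed if present:

```python
# Hedged sketch (not project code): report which accelerator backends
# this PyTorch install can actually see, falling back gracefully when
# torch or torch-directml is not installed.
def available_backends():
    backends = []
    try:
        import torch
    except ImportError:
        return backends  # no torch at all in this environment
    if torch.cuda.is_available():
        backends.append("cuda")
    try:
        import torch_directml  # AMD-on-Windows fallback, if installed
        backends.append("directml")
    except ImportError:
        pass
    backends.append("cpu")  # always usable
    return backends

print(available_backends())
```

With a CPU-only torch wheel installed, this prints `['cpu']`, which matches the behavior above: the CUDA check in train.py fails because torch simply cannot see the AMD GPU.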

34j (Collaborator, Author) commented Mar 26, 2023

Could you remove that part, run pip install torch-openml, and try again?

@pierluigizagaria

Nothing was found; I tried pip install openml-pytorch instead, but nothing changed.

34j (Collaborator, Author) commented Mar 26, 2023

Understood. I failed.

@pierluigizagaria

Sure, I did; in fact no CUDA error is printed now, but the command does nothing and exits straight away.


34j (Collaborator, Author) commented Mar 26, 2023

Did svc pre-hubert work correctly?

@pierluigizagaria

Yes, pre-hubert worked, but no GPU was used.

pierluigizagaria commented Mar 26, 2023

But in this branch, realtime inference works on GPU.

Edit: not really.

34j (Collaborator, Author) commented Mar 27, 2023

@allcontributors add pierluigizagaria userTesting

@allcontributors
Contributor

@34j

I've put up a pull request to add @pierluigizagaria! 🎉

@voicepaw voicepaw deleted a comment from allcontributors bot Mar 27, 2023
34j (Collaborator, Author) commented Mar 27, 2023

It seems difficult to support, so I give up.

pierluigizagaria commented Apr 5, 2023

Can this project work with Python 3.11? I'm trying to install torch-mlir, which should make torch compatible with my AMD GPU on Windows.

I've already tried using torch-directml but got the error mentioned here microsoft/DirectML#400

34j (Collaborator, Author) commented Apr 7, 2023

> pipdeptree --reverse --packages llvmlite
Warning!!! Possibly conflicting dependencies found:
* poetry==1.4.2
 - platformdirs [required: >=2.5.2,<3.0.0, installed: 3.2.0]
------------------------------------------------------------------------
Warning!! Cyclic dependencies found:
* poetry-plugin-export => poetry => poetry-plugin-export
* poetry => poetry-plugin-export => poetry
------------------------------------------------------------------------
llvmlite==0.39.1
  - numba==0.56.4 [requires: llvmlite>=0.39.0dev0,<0.40]
    - librosa==0.9.1 [requires: numba>=0.45.1]
      - so-vits-svc-fork==3.0.4 [requires: librosa]
      - torchcrepe==0.0.18 [requires: librosa==0.9.1]
        - so-vits-svc-fork==3.0.4 [requires: torchcrepe>=0.0.17]
    - resampy==0.4.2 [requires: numba>=0.53]
      - librosa==0.9.1 [requires: resampy>=0.2.2]
        - so-vits-svc-fork==3.0.4 [requires: librosa]
        - torchcrepe==0.0.18 [requires: librosa==0.9.1]
          - so-vits-svc-fork==3.0.4 [requires: torchcrepe>=0.0.17]
      - scikit-maad==1.3.12 [requires: resampy>=0.2]
        - so-vits-svc-fork==3.0.4 [requires: scikit-maad]
      - torchcrepe==0.0.18 [requires: resampy]
        - so-vits-svc-fork==3.0.4 [requires: torchcrepe>=0.0.17]

3.10 is not supported for the above reasons, but won't it work with 3.11?
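For reference, the pin chain above bottoms out in llvmlite/numba. A small standard-library sketch (package names taken from the pipdeptree output above) to check what is actually installed in a given environment:

```python
# Hedged sketch: report installed versions of the packages that pin
# this project's Python version, using only the standard library.
from importlib import metadata

def dependency_versions(packages=("numba", "llvmlite", "librosa")):
    report = {}
    for name in packages:
        try:
            report[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            report[name] = None  # not installed in this environment
    return report

print(dependency_versions())
```

Until numba/llvmlite publish wheels built for 3.11, anything that pulls in librosa (as so-vits-svc-fork does) inherits the cap.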

@pierluigizagaria

I got an error while trying to install on 3.11

34j (Collaborator, Author) commented Apr 8, 2023

Sorry, my typo, I was trying to ask if torch-mlir would work with 3.10.

@pierluigizagaria

They don't provide compiled Windows builds for 3.10.

34j (Collaborator, Author) commented Apr 8, 2023

Since both inference and training rely on librosa as of now, 3.11 support is not possible.

34j (Collaborator, Author) commented Apr 30, 2023

Installing the rc version of numba may allow it to be used with Python 3.11, but it may cause other problems (I haven't tried it).
(numba/numba#8841)

robonxt commented Jun 21, 2023

Would it be possible to run this using DirectML? (Although I've only gotten DirectML working on Python 3.10.6; I haven't tried newer versions.)
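If anyone wants to try, here is a minimal hedged sketch of selecting a DirectML device with the torch-directml package (assumed installed via pip install torch-directml; falls back to CPU when it isn't):

```python
# Hedged sketch, assuming the torch-directml package is installed
# (pip install torch-directml); falls back to "cpu" otherwise.
def pick_training_device():
    try:
        import torch_directml
        return torch_directml.device()  # DirectML device for AMD/Intel GPUs on Windows
    except ImportError:
        return "cpu"

print(pick_training_device())
```

The catch noted earlier in this thread still applies: even when the device is created, ops that torch-directml does not implement (see microsoft/DirectML#400) will fail at runtime.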

@mikeyang01

microsoft/DirectML#400

Any update on this?
