
fix(inference): remove weight norm on inference so mps backend will work without CPU fallback #783

Merged · 1 commit into voicepaw:main on Jul 2, 2023

Conversation

shenberg
Contributor

Description of change

Added code to remove weight norms from hubert and net_g (inference-only for net_g), because, at least as of PyTorch 2.0.1, the mps backend does not support weight norm.
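
For context, a minimal sketch of the general technique (the helper name `remove_weight_norms_` is illustrative, not the PR's actual code): `torch.nn.utils.remove_weight_norm` folds the `weight_g`/`weight_v` parametrization back into a plain `weight` tensor, so the forward pass no longer hits the op that mps lacks.

```python
# Minimal sketch, not the PR's exact code: strip weight norm from every
# submodule that has it. remove_weight_norm() folds weight_g/weight_v back
# into a plain `weight`, and raises ValueError on modules without the hook.
from torch import nn
from torch.nn.utils import remove_weight_norm


def remove_weight_norms_(model: nn.Module) -> None:
    """Remove weight norm from all submodules of `model`, in place."""
    for module in model.modules():
        try:
            remove_weight_norm(module)
        except ValueError:
            pass  # this submodule has no weight norm registered
```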

Pull-Request Checklist

  • Code is up-to-date with the main branch
  • This pull request follows Contributing.md
  • pre-commit run -a passes with this change or ci passes
  • poetry run pytest passes with this change or ci passes
  • (There are new or updated unit tests validating the change) - it's a bit tricky to test; if the inference tests were active, they would cover this change.
  • The new commits follow conventions outlined in the conventional commit spec

@shenberg shenberg changed the title fix(inference): remove weight norm on inference so metal backend will… fix(inference): remove weight norm on inference so mps backend will work without CPU fallback Jun 22, 2023
@codecov

codecov bot commented Jun 22, 2023

Codecov Report

Patch coverage: 5.55% and project coverage change: -0.06 ⚠️

Comparison: base (9ebc6fe) at 19.43% vs. head (dfe0028) at 19.38%.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #783      +/-   ##
==========================================
- Coverage   19.43%   19.38%   -0.06%     
==========================================
  Files          39       39              
  Lines        3452     3467      +15     
  Branches      484      489       +5     
==========================================
+ Hits          671      672       +1     
- Misses       2763     2777      +14     
  Partials       18       18              
Impacted Files                             Coverage Δ
src/so_vits_svc_fork/inference/core.py    18.25% <0.00%> (-0.30%) ⬇️
src/so_vits_svc_fork/utils.py             21.09% <7.69%> (-0.54%) ⬇️

☔ View full report in Codecov by Sentry.

@34j
Collaborator

34j commented Jun 25, 2023

Will this change behave as before on other devices? If you are not highly confident, then even though it will make the code messier, I would like you to wrap every modified part in an `if torch.backends.mps.is_available():` check (if the code stops working on other backends, that would be critical). Thanks for your PR.
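
Applied to the hypothetical helper sketched above, the guard being suggested here might look like this (a sketch of the suggestion, not merged code; `net_g` and `hubert` stand in for the models the PR touches):

```python
# Sketch of the suggested guard: only strip weight norm when the MPS
# backend is available, leaving CUDA/CPU behavior untouched.
# `net_g` and `hubert` are placeholders for the models the PR modifies.
import torch

if torch.backends.mps.is_available():
    remove_weight_norms_(net_g)
    remove_weight_norms_(hubert)
```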

@shenberg
Contributor Author

shenberg commented Jun 25, 2023

I tried it with the CPU backend as well, and I'm fairly confident it will work without issues on CUDA too, but I haven't tested it there: I'm traveling and don't have access to a CUDA machine at the moment.

@34j
Collaborator

34j commented Jun 25, 2023

I don't think there are that many MPS users compared to CUDA users, so I would like to wait patiently until someone reports that it works, or you could enclose the changes in if statements. (Sorry, but I don't have the energy to test this myself.)

@34j
Collaborator

34j commented Jul 2, 2023

I am ashamed to say that I did not understand weight norm. I see now that it should be excluded at inference time, as you implied.
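
(Background on why this removal is safe: weight norm reparametrizes `weight = g * v / ||v||` to improve training dynamics; at inference the decomposition adds nothing, and removing it just bakes the computed weight back in. A toy check, not from the PR:)

```python
# Toy check (not from the PR): removing weight norm leaves outputs unchanged.
import torch
from torch import nn
from torch.nn.utils import weight_norm, remove_weight_norm

layer = weight_norm(nn.Conv1d(4, 4, kernel_size=3))
layer.eval()
x = torch.randn(1, 4, 16)
with torch.no_grad():
    before = layer(x)
    remove_weight_norm(layer)  # folds weight_g/weight_v into plain weight
    after = layer(x)
print(torch.allclose(before, after, atol=1e-6))  # True
```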

@34j 34j merged commit 39ea0bc into voicepaw:main Jul 2, 2023
7 of 9 checks passed
@34j
Collaborator

34j commented Jul 5, 2023

@allcontributors add @shenberg for userTesting, ideas, code

@allcontributors
Contributor

@34j

I've put up a pull request to add @shenberg! 🎉
