Drum GPU #121
Conversation
assert self.phase.shape[0] == self.batch_size
self.phase = nn.Parameter(
    data=self.get_parameter("initial_phase"), requires_grad=False
)
Not sure if there is a better way to do this -- making it a parameter makes sure that it gets transferred to the GPU when the module is transferred
Hmm, no, I don't think so.
My understanding is that if we rerun this several times, we would be tuning the initial phase but not the phase itself.
So this should just be something like:
self.phase = T(self.get_parameter("initial_phase"), requires_grad=requires_grad, device=device)
See https://towardsdatascience.com/7-tips-for-squeezing-maximum-performance-from-pytorch-ca4a40951259 ("Construct tensors directly on GPUs").
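As a standalone illustration of that tip (not code from this PR), a minimal sketch of constructing a tensor directly on the target device versus allocating on the CPU and copying it over:

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Slower pattern: allocate on the CPU, then copy to the device.
a = torch.zeros(1024, 1024).to(device)

# Faster pattern: allocate directly on the target device.
b = torch.zeros(1024, 1024, device=device)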
The problem is that the parameters haven't been transferred to the GPU by this point. If you call:
vco = TorchSineVCO().to(device)
the constructor is called and self.phase gets created on the CPU. Then all the parameters get moved onto the GPU in the to(device) call, but self.phase stays on the CPU, which is a problem. By making self.phase an nn.Parameter, it also gets moved to the GPU on to(device).
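A minimal, self-contained sketch of the behavior described above (ToyModule is a hypothetical stand-in, not a torchsynth class): a plain tensor attribute is left on the CPU by .to(device), while an nn.Parameter with requires_grad=False, or a registered buffer, moves with the module.

import torch
import torch.nn as nn

class ToyModule(nn.Module):
    def __init__(self):
        super().__init__()
        # Plain tensor attribute: .to(device) does NOT move this.
        self.plain_phase = torch.zeros(4)
        # nn.Parameter (requires_grad=False): .to(device) DOES move this.
        self.param_phase = nn.Parameter(torch.zeros(4), requires_grad=False)
        # A registered buffer also moves with the module.
        self.register_buffer("buffer_phase", torch.zeros(4))

if torch.cuda.is_available():
    device = torch.device("cuda")
    m = ToyModule().to(device)
    print(m.plain_phase.device)   # cpu
    print(m.param_phase.device)   # cuda:0
    print(m.buffer_phase.device)  # cuda:0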
cosine_argument = self.make_argument(control_as_frequency) + self.phase
self.phase = cosine_argument[:, -1]
cosine_argument = self.make_argument(control_as_frequency)
cosine_argument += self.phase.unsqueeze(1)
Later, but not now, we should figure out whether unsqueeze returns a view (fast) or a copy (slow).
it's a view: "The returned tensor shares the same underlying data with this tensor."
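A quick standalone check (not from the PR) that unsqueeze shares storage with the original tensor, so writes through the view are visible through the base:

import torch

x = torch.arange(6.0)
y = x.unsqueeze(1)                    # shape (6, 1); a view, not a copy
print(y.data_ptr() == x.data_ptr())   # True: same underlying storage
y[0, 0] = 42.0
print(x[0])                           # tensor(42.): the write shows up in x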
torchsynth/module.py
@@ -757,8 +776,10 @@ def __init__(
# with the mindset that if you are trying to learn to
# synthesize a sound, you won't be adjusting the note_on_duration.
# However, this is easily changed if desired.
# TODO: Fix this later
self.note_on_duration = T([note_on_duration] * synthglobals.batch_size)
# TODO: Fix this later -- should it actually be a parameter?
Yes, link to this issue in the code: #117
Not only should it be a parameter, it should be a TorchKnob
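For illustration only, a hedged sketch of the two options being discussed, outside torchsynth (TorchKnob's actual interface is project-specific and not reproduced here):

import torch
import torch.nn as nn

batch_size = 4
note_on_duration = 1.0

# Current approach in the diff: a plain tensor, fixed during training.
fixed = torch.tensor([note_on_duration] * batch_size)

# Alternative raised here: a parameter, so it moves with the module
# and can optionally be tuned.
learnable = nn.Parameter(torch.tensor([note_on_duration] * batch_size))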
This pull request introduces 39 alerts and fixes 4 when merging c7b88b3 into 7c72212 - view on LGTM.com
This pull request fixes 5 alerts when merging c7b88b3 into a116ed5 - view on LGTM.com
Codecov Report
@@            Coverage Diff             @@
##             main     #121      +/-   ##
==========================================
+ Coverage   96.19%   96.21%   +0.02%
==========================================
  Files           7        7
  Lines         657      661       +4
==========================================
+ Hits          632      636       +4
  Misses         25       25
This pull request introduces 1 alert and fixes 5 when merging c10a1b5 into 254a73e - view on LGTM.com
This pull request fixes 5 alerts when merging 384d379 into 79f6777 - view on LGTM.com
This pull request fixes 5 alerts when merging 470e125 into 3da6730 - view on LGTM.com
This pull request fixes 5 alerts when merging 2395e6d into 3da6730 - view on LGTM.com
Builds on #116 and makes the changes needed for TorchDrum to run on the GPU.