Drum GPU #121

Merged: 11 commits merged into main from drum-gpu on Mar 9, 2021

Conversation

jorshi (Collaborator) commented Mar 9, 2021

Builds on #116 and makes the changes needed for TorchDrum to run on the GPU.

assert self.phase.shape[0] == self.batch_size
self.phase = nn.Parameter(
    data=self.get_parameter("initial_phase"), requires_grad=False
)

jorshi (Collaborator, Author):

Not sure if there is a better way to do this -- making it a parameter makes sure that it gets transferred to the GPU when the module is transferred

Collaborator:

Hmm, no, I don't think so.

My understanding is that we would be tuning the initial phase but not the phase itself if we rerun this several times.

So this should just be something like:
self.phase = T(self.get_parameter("initial_phase"), requires_grad=False, device=device)

See https://towardsdatascience.com/7-tips-for-squeezing-maximum-performance-from-pytorch-ca4a40951259 ("Construct tensors directly on GPUs").
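
For reference, the linked tip amounts to allocating tensors with device= at construction time rather than building them on the CPU and copying them over; a rough sketch (the size 1024 is arbitrary):

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Slower: allocated on the CPU, then copied to the target device.
x = torch.zeros(1024).to(device)

# Faster: allocated directly on the target device.
y = torch.zeros(1024, device=device)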

jorshi (Collaborator, Author):

The problem is that the parameters haven't been transferred to the GPU by this point. If you call:

vco = TorchSineVCO().to(device)

the constructor is called and self.phase gets created on the CPU. Then all the parameters get moved onto the GPU by the to(device) call, but self.phase stays on the CPU, which is a problem. By making self.phase an nn.Parameter, it also gets moved to the GPU on to(device).
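
For reference, a minimal sketch of the behavior being described (ToyModule here is hypothetical, not torchsynth code): a plain tensor attribute is left behind by .to(device), while an nn.Parameter with requires_grad=False, or a buffer registered with register_buffer, moves with the module:

import torch
import torch.nn as nn

class ToyModule(nn.Module):
    def __init__(self, batch_size=4):
        super().__init__()
        # Plain tensor attribute: NOT moved by .to(device).
        self.phase_plain = torch.zeros(batch_size)
        # Non-learnable parameter: moved by .to(device), as in this PR.
        self.phase_param = nn.Parameter(torch.zeros(batch_size), requires_grad=False)
        # register_buffer is the other standard way to get the same behavior.
        self.register_buffer("phase_buffer", torch.zeros(batch_size))

if torch.cuda.is_available():
    module = ToyModule().to("cuda")
    print(module.phase_plain.device)   # cpu  <- the problem described above
    print(module.phase_param.device)   # cuda:0
    print(module.phase_buffer.device)  # cuda:0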

turian changed the base branch from main to drum2d March 9, 2021 05:48
cosine_argument = self.make_argument(control_as_frequency) + self.phase
self.phase = cosine_argument[:, -1]
cosine_argument = self.make_argument(control_as_frequency)
cosine_argument += self.phase.unsqueeze(1)

Collaborator:

Later (but not now) we should figure out whether unsqueeze is a view (fast) or a clone (slow).

jorshi (Collaborator, Author):

It's a view: "The returned tensor shares the same underlying data with this tensor."
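
A quick throwaway check (not from the PR) that confirms this: a view shares storage with its source, so the data pointer is unchanged and writes through the unsqueezed tensor show up in the original:

import torch

phase = torch.zeros(3)
expanded = phase.unsqueeze(1)  # shape (3, 1)

# Shared storage -> unsqueeze returns a view, not a copy.
print(expanded.data_ptr() == phase.data_ptr())  # True

expanded[0, 0] = 1.0
print(phase[0])  # tensor(1.) -- the write is visible in the original tensor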

@@ -757,8 +776,10 @@ def __init__(
# with the mindset that if you are trying to learn to
# synthesize a sound, you won't be adjusting the note_on_duration.
# However, this is easily changed if desired.
# TODO: Fix this later
self.note_on_duration = T([note_on_duration] * synthglobals.batch_size)
# TODO: Fix this later -- should it actually be a parameter?

Collaborator:

Yes, link to this issue in the code: #117

Collaborator:

Not only should it be a parameter, it should be a TorchKnob
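
For the record, a rough sketch of the direction being suggested (an assumption about how it might look, not code from the PR; TorchKnob itself is not shown, and ToyNote is a hypothetical stand-in for the module in the diff above):

import torch
import torch.nn as nn

class ToyNote(nn.Module):
    def __init__(self, note_on_duration=1.0, batch_size=4):
        super().__init__()
        # Registered as a non-learnable parameter so it follows the module
        # in .to(device); making it learnable (or wrapping it in TorchKnob)
        # would be a further step on top of this.
        self.note_on_duration = nn.Parameter(
            torch.tensor([note_on_duration] * batch_size),
            requires_grad=False,
        )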

lgtm-com bot commented Mar 9, 2021

This pull request introduces 39 alerts and fixes 4 when merging c7b88b3 into 7c72212 - view on LGTM.com

new alerts:

  • 39 for Non-callable called

fixed alerts:

  • 2 for Non-callable called
  • 1 for Module is imported more than once
  • 1 for Unused import

lgtm-com bot commented Mar 9, 2021

This pull request fixes 5 alerts when merging c7b88b3 into a116ed5 - view on LGTM.com

fixed alerts:

  • 3 for Non-callable called
  • 2 for Variable defined multiple times

codecov bot commented Mar 9, 2021

Codecov Report

Merging #121 (2395e6d) into main (3da6730) will increase coverage by 0.02%.
The diff coverage is 100.00%.

@@            Coverage Diff             @@
##             main     #121      +/-   ##
==========================================
+ Coverage   96.19%   96.21%   +0.02%     
==========================================
  Files           7        7              
  Lines         657      661       +4     
==========================================
+ Hits          632      636       +4     
  Misses         25       25              
Flag        Coverage Δ
pytest      55.13% <10.00%> (-0.24%) ⬇️
unittests   93.19% <100.00%> (+0.04%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files          Coverage Δ
examples/examples.py    99.45% <100.00%> (+<0.01%) ⬆️
torchsynth/module.py    96.80% <100.00%> (+0.02%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

lgtm-com bot commented Mar 9, 2021

This pull request introduces 1 alert and fixes 5 when merging c10a1b5 into 254a73e - view on LGTM.com

new alerts:

  • 1 for Unused import

fixed alerts:

  • 3 for Non-callable called
  • 2 for Variable defined multiple times

lgtm-com bot commented Mar 9, 2021

This pull request fixes 5 alerts when merging 384d379 into 79f6777 - view on LGTM.com

fixed alerts:

  • 3 for Non-callable called
  • 2 for Variable defined multiple times

Base automatically changed from drum2d to main March 9, 2021 16:53
lgtm-com bot commented Mar 9, 2021

This pull request fixes 5 alerts when merging 470e125 into 3da6730 - view on LGTM.com

fixed alerts:

  • 3 for Non-callable called
  • 2 for Variable defined multiple times

jorshi merged commit 8a40127 into main Mar 9, 2021
jorshi deleted the drum-gpu branch March 9, 2021 17:27
lgtm-com bot commented Mar 9, 2021

This pull request fixes 5 alerts when merging 2395e6d into 3da6730 - view on LGTM.com

fixed alerts:

  • 3 for Non-callable called
  • 2 for Variable defined multiple times
