
Regarding the demonstration of faster acceleration results in PyTorch #2

Closed
seulkiyeom opened this issue May 30, 2018 · 3 comments

@seulkiyeom commented May 30, 2018

Hi lancopku,

I'm currently implementing your meProp code to understand the flow of the architecture in detail.

However, I couldn't see any improvement in acceleration from meProp compared to a conventional MLP.

According to Tables 7 and 8 of the paper (Sun et al., 2017), the PyTorch-based GPU computation achieves a much faster back-propagation procedure.

Could you please let me know how to implement meProp so that it shows faster back-propagation?

Best,
Seul-Ki

@jklj077 (Collaborator) commented May 31, 2018

Hi,

From what I understand, you're reimplementing the meProp method using your own code. As I do not know your implementation, I can't answer your question in detail. But here are some points that I think could be useful for you:

  • Implement the simple unified top-k. The original top-k is not suitable for GPU-style parallel computation.
  • Refrain from using sparse matrix operations. Again, GPUs are not well suited to accelerating sparse matrix operations. Instead, extract the columns or rows to build a new matrix and then use normal matrix operations (the provided code uses this implementation; a rough sketch follows this list).
  • Speedup on GPUs is more meaningful for heavy models that can fully utilize the GPU's computational power. The reason is explained in Section 4.8 of the paper. In essence, models of relatively small sizes (e.g., hidden sizes of 64 and 512) run at almost the same speed on GPUs.
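
As a rough illustration of these points, the backward pass of a linear layer can keep only the k output units with the largest gradient magnitudes (ranked jointly over the minibatch, i.e., unified top-k) and extract the corresponding rows with dense index_select and matmul calls. The sketch below is only an assumption of how this could look in PyTorch; it is not the code from this repo, and the class name MePropLinear is made up here:

import torch

class MePropLinear(torch.autograd.Function):
    # Sketch only: a linear layer whose backward pass keeps the k output
    # units with the largest summed |gradient| over the minibatch
    # (unified top-k) and uses dense index_select/matmul instead of
    # sparse operations.
    @staticmethod
    def forward(ctx, x, weight, bias, k):
        ctx.save_for_backward(x, weight)
        ctx.k = k
        return x.matmul(weight.t()) + bias

    @staticmethod
    def backward(ctx, grad_output):
        x, weight = ctx.saved_tensors
        # Unified top-k: one shared index set for the whole minibatch.
        scores = grad_output.abs().sum(dim=0)            # (out_features,)
        _, idx = scores.topk(ctx.k)
        grad_out_k = grad_output.index_select(1, idx)    # (batch, k)
        weight_k = weight.index_select(0, idx)           # (k, in_features)
        grad_x = grad_out_k.matmul(weight_k)             # (batch, in_features)
        grad_weight = torch.zeros_like(weight)
        grad_weight.index_copy_(0, idx, grad_out_k.t().matmul(x))
        grad_bias = weight.new_zeros(weight.size(0))
        grad_bias.index_copy_(0, idx, grad_out_k.sum(dim=0))
        return grad_x, grad_weight, grad_bias, None

# hypothetical usage: y = MePropLinear.apply(x, weight, bias, k)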

Best regards

@akaniklaus commented Jan 16, 2019

Dear @jklj077,

Could you please provide a proper simple unified top-k implementation that we can plug and play into our code (as in the following example)? Thank you very much for your time and consideration.

Note: the following works, but it doesn't result in any speed improvement when training the CNN.

import torch
import torch.nn as nn

def topk(x, k=0.5):
    # Zero out all but the top-k fraction of activations (by magnitude) per example.
    original_size = None
    if x.dim() > 2:
        original_size = x.size()
        x = x.view(x.size(0), -1)
    ax = torch.abs(x.data)
    # The k-th largest magnitude in each row serves as the threshold.
    thresholds, _ = ax.topk(int(x.size(-1) * k))
    thresholds = thresholds[:, -1]
    y = x.clone()
    y[ax < thresholds.repeat(x.size(-1), 1).transpose(0, 1)] = 0
    if original_size is not None:
        y = y.view(original_size)
    return y

        ....
        self.conv = nn.Conv1d(in_channels, out_channels, **kwargs)

    def forward(self, x):
        return topk(self.conv(x)) if self.training else self.conv(x)

@jklj077 (Collaborator) commented Mar 4, 2019

@akaniklaus We're sorry for the inconvenience, and we really wish we could help. For linear layers, we found a simple way (i.e., the simple unified top-k) to optimize the speed on GPUs from the Python side of PyTorch (pre v0.3). That is what this repo does. For CNNs, there is simply no API we can tweak on the Python side to achieve a similar effect, because all the operations are coded on the C++ side. Making it plug-and-play while showing real speedups would require digging much deeper into the PyTorch implementation and writing specific optimizations for every different scenario, which is beyond our means.

PS: The snippet you provided does top-k in the forward propagation. If meProp is what you're trying to implement, you may need to write a backward method for the module and do top-k there (see the sketch below). But this is irrelevant to speed, so it may not matter now.
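
For reference, a minimal sketch of such a backward-side top-k (per-example rather than unified, purely illustrative, and, as said above, not something that will make the CNN faster) could look like this:

import torch

class TopKGrad(torch.autograd.Function):
    # Sketch only: identity in the forward pass; in the backward pass,
    # keep just the k largest-magnitude gradient entries per example
    # and zero out the rest. Here k is an integer count, not a fraction.
    @staticmethod
    def forward(ctx, x, k):
        ctx.k = k
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        flat = grad_output.reshape(grad_output.size(0), -1)
        vals, _ = flat.abs().topk(ctx.k, dim=1)
        threshold = vals[:, -1:]   # per-example k-th largest magnitude
        pruned = flat * (flat.abs() >= threshold).type_as(flat)
        return pruned.view_as(grad_output), None

# hypothetical usage in forward():
#     return TopKGrad.apply(self.conv(x), k) if self.training else self.conv(x)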

Best regards

jklj077 closed this as completed Mar 4, 2019