Regarding the demonstration for faster acceleration results in pytorch #2
Comments
Hi. From what I understand, you're reimplementing the meProp method with your own code. Since I don't know your implementation, I can't answer your question in detail, but here are some points that may be useful for you:
Best regards
Dear @jklj077, could you please provide a proper, simple unified top-k implementation that we can plug and play into our code (as in the following example)? Thank you very much for your time and consideration. Note: the following one works but does not result in any speed improvement when training the CNN.
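(The snippet originally attached here is not preserved in this thread. As a purely hypothetical illustration of the kind of module being discussed, here is a sketch of a linear layer that applies top-k in the forward pass only; all names are my own, and, as the reply below notes, this style does not sparsify the backward computation, so it brings no speedup.)

```python
import torch
import torch.nn as nn

class TopKForward(nn.Module):
    """Hypothetical sketch: a linear layer that zeroes all but the k
    largest-magnitude activations in the FORWARD pass. This is not the
    user's original snippet, only an illustration of the pattern."""

    def __init__(self, in_features, out_features, k):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.k = k

    def forward(self, x):
        y = self.linear(x)
        # Keep the k entries with the largest absolute value in each row.
        _, idx = y.abs().topk(self.k, dim=1)
        mask = torch.zeros_like(y).scatter_(1, idx, 1.0)
        return y * mask
```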
@akaniklaus We're sorry for the inconvenience, and we really wish we could help. For linear layers, we actually found a simple way (i.e., simple unified top-k) on the Python side of PyTorch (pre v0.3) to optimize the speed on GPUs; that is what this repo does. For CNNs, there is simply no API we can tweak on the Python side to achieve a similar effect, because all the operations are implemented on the C++ side. Making it plug-and-play while showing real speedups would require digging much deeper into the PyTorch internals and writing specific optimizations for every different scenario, which is beyond our means.

PS: The snippet you provided actually does top-k in the forward propagation. If meProp is what you're trying to implement, you would need to write a backward method for the module class and do top-k there. But that is irrelevant to speed, so I think it doesn't matter now.

Best regards
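(To make the PS concrete, here is a minimal sketch, not the repo's code, of what "top-k in the backward pass" means for a linear layer: a custom autograd Function that keeps only the k largest-magnitude entries of the output gradient before computing the parameter and input gradients. All names are my own; and since it still uses dense tensors, it demonstrates the meProp semantics but, as the reply explains, gives no speedup on its own.)

```python
import torch

class MePropLinearFn(torch.autograd.Function):
    """Linear layer whose backward pass keeps only the k entries of the
    output gradient with the largest absolute value in each row, zeroing
    the rest (meProp-style sparsification). Illustrative sketch only."""

    @staticmethod
    def forward(ctx, x, weight, bias, k):
        ctx.save_for_backward(x, weight)
        ctx.k = k
        return x @ weight.t() + bias

    @staticmethod
    def backward(ctx, grad_out):
        x, weight = ctx.saved_tensors
        # Top-k selection on the output gradient, row by row.
        _, idx = grad_out.abs().topk(ctx.k, dim=1)
        sparse_grad = torch.zeros_like(grad_out)
        sparse_grad.scatter_(1, idx, grad_out.gather(1, idx))
        # Gradients are computed from the sparsified output gradient.
        grad_x = sparse_grad @ weight
        grad_w = sparse_grad.t() @ x
        grad_b = sparse_grad.sum(0)
        return grad_x, grad_w, grad_b, None
```

Used as `MePropLinearFn.apply(x, weight, bias, k)`; a real speedup would additionally require restricting the two matrix products to the selected columns rather than multiplying through the zeros.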
Hi lancopku,
I'm currently implementing your meProp code to understand the flow of the architecture in detail. However, I couldn't see the improved acceleration of meProp compared to that of a conventional MLP.
According to Tables 7 and 8 of the paper (Sun et al., 2017), the PyTorch-based GPU computation achieves a faster back-propagation procedure.
Could you please let me know how to implement meProp so that it shows faster backprop computation?
Best,
Seul-Ki