Usama113/Maxout-PyTorch

This is a PyTorch implementation of the Maxout layer from the paper https://arxiv.org/pdf/1302.4389.pdf. In the forward pass, maxout is computed over the feature maps of a CNN. In the backward pass, the derivative of the error w.r.t. the input is propagated only through the cells of the feature-map array that were activated in the forward pass (this is exactly the derivative of Maxout). To verify correctness, the results were compared against Maxout in Theano (where it goes by the name feature-pool layer) and confirmed to match. To adapt the layer to your needs, just change the max_out parameter in the forward function. The code is efficient and fast.
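The behavior described above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the repository's actual code: it assumes maxout groups every `max_out` consecutive channels, and it lets autograd route gradients through the winning cells, which reproduces the manual backward pass described above (gradients flow only to the activated elements).

```python
import torch

def maxout(x, max_out=4):
    # x: (N, C, H, W) feature maps; C must be divisible by max_out.
    # Reshape so every group of `max_out` consecutive channels sits
    # along its own dimension, then keep the element-wise maximum.
    n, c, h, w = x.shape
    assert c % max_out == 0, "channel count must be divisible by max_out"
    grouped = x.view(n, c // max_out, max_out, h, w)
    # torch.max over the group dimension; autograd propagates the
    # gradient only through the index that won the max, which is
    # the derivative of Maxout.
    out, _ = grouped.max(dim=2)
    return out

x = torch.randn(2, 8, 5, 5, requires_grad=True)
y = maxout(x, max_out=4)
print(y.shape)  # torch.Size([2, 2, 5, 5])
```

Backpropagating through `y` (e.g. `y.sum().backward()`) leaves a nonzero gradient in exactly one cell per group of `max_out` channels, matching the masking behavior the description attributes to the custom backward pass.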