Shrink Pt. 1: Sparsify #4
Labels:
cleaning: Use when cleaning up code, not necessarily changing function
enhancement: New feature or request
This was referenced Feb 15, 2019 (Open).
stephenjfox added a commit that referenced this issue on Feb 18, 2019:
PyTorch's boolean comparison behavior isn't useful and makes it a pain to test exact tensor values. Will resume later.
Definition of "correct count of neurons":
stephenjfox added a commit that referenced this issue on Feb 19, 2019.
stephenjfox added a commit that referenced this issue on Feb 19, 2019:
* Attempt to test for #4. PyTorch's boolean comparison behavior isn't useful and makes it a pain to test exact tensor values. Will resume later.
* Skipping the sparsify test. It's a painfully simple function that has worked every time I've used it.
  - No, it doesn't handle every edge case.
  + Yes, it gets the job done and can be packaged for the general case.
* Use the instance method `.nonzero()` instead of `torch.nonzero()`.
* Fix the "type-check" in the layer inspectors.
* WIP: Implement shrink() in terms of resize_layers(). It was as easy as I wanted it to be.
  * The complexity is how to handle a given nested layer.
    + Those will get implemented with a given feature.
    - Need to program feature detection.
  TODO:
  + Implement the resizing on a layer-by-layer basis, to make the shrinking a bit different.
  + Instead of applying the data transformation uniformly, each layer gets its own factor.
  + Those factors will be computed as 1 - percent_waste(layer).
* Lay out the skeleton for the true shrinking algorithm (#4).
  * shrink_layer() is simple.
  * Justification for giving Shrinkage an 'input_dimensions' property:
    > The thought is that channel depth doesn't change the output dimensions for CNNs, and that's the attribute we're concerned with in the convolutional case...
    * Linear layers only have two dimensions, so it's a huge deal there.
    * RNNs do linear things over 'timesteps', so it's a big deal there.
    * Residual/identity/skip-connections in CNNs need this.
    > __It's decided__. The attribute stays.
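The commit log above says each layer's resize factor will be computed as `1 - percent_waste(layer)`. A minimal NumPy sketch of that idea follows; the project itself uses PyTorch, and the signature and magnitude threshold for `percent_waste` are assumptions, not the repo's actual API:

```python
import numpy as np

def percent_waste(weights: np.ndarray, threshold: float = 1e-3) -> float:
    """Fraction of weights whose magnitude falls below `threshold`.

    A stand-in for the repo's percent_waste(layer); the threshold is assumed.
    """
    return float(np.mean(np.abs(weights) < threshold))

def shrink_factor(weights: np.ndarray, threshold: float = 1e-3) -> float:
    """Per-layer factor from the log: 1 - percent_waste(layer)."""
    return 1.0 - percent_waste(weights, threshold)

layer = np.array([0.5, 0.0, -0.2, 0.0])
print(shrink_factor(layer))  # half the weights are ~zero, so the factor is 0.5
```

Applying the factor per layer, rather than one uniform factor for the whole network, is exactly the shift the TODO items describe.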
- `sparsify(tensor, threshold)` performs as intended
- `shrink` on a given layer produces the correct count of neurons for a given `nn.Module`
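The first acceptance criterion above can be sketched as follows. This is a minimal NumPy stand-in for the thresholding behavior the issue title describes; the repo's real `sparsify` operates on PyTorch tensors and may differ in details:

```python
import numpy as np

def sparsify(tensor: np.ndarray, threshold: float) -> np.ndarray:
    """Zero out every entry whose magnitude is below `threshold`."""
    result = tensor.copy()
    result[np.abs(result) < threshold] = 0.0
    return result

x = np.array([0.05, -0.8, 0.001, 1.2])
print(sparsify(x, threshold=0.1))  # 0.05 and 0.001 fall below the threshold and are zeroed
```

With a representation like this, a layer's surviving neurons can be counted via the nonzero entries, which is what the `shrink` criterion checks.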
I recommend an internal encapsulation to handle this relaying of information.
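One hypothetical shape for that internal encapsulation is a small record carrying the per-layer data that shrinking needs, including the `input_dimensions` attribute justified in the commit log. The class and field names below are illustrative only, not the repo's actual API:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class LayerResizeInfo:
    """Hypothetical carrier for the per-layer data shrink() relays."""
    input_dimensions: Tuple[int, ...]  # dims the layer expects; matters for Linear/RNN/skip cases
    shrink_factor: float               # computed as 1 - percent_waste(layer)

info = LayerResizeInfo(input_dimensions=(64, 32), shrink_factor=0.75)
print(info.shrink_factor)
```

Bundling these values keeps the layer-by-layer resizing loop from threading loose arguments through every call.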