Intrinsic Dimensionality #4

Open
rahulvigneswaran opened this issue May 14, 2020 · 3 comments

Comments

rahulvigneswaran commented May 14, 2020

@mdda This repo is amazing, thank you so much for it. I am trying to play with the Intrinsic Dimensionality code (link), but I am not quite able to understand the IntrinsicDimensionWrapper(torch.nn.Module) class. Can you walk me through it, especially the for loop in __init__ and the forward method?

mdda commented May 15, 2020

How familiar are you with the Intrinsic Dimension paper? It's been a while, but I seem to recall the basic idea is that one can replace an existing network parameterisation W with one that looks like W_new = W_0 + V*W_expansion. Here W_0 is just some random initial state, V is a new variable with a low dimension (the intrinsic_dimension, once we've figured out the lowest sensible size), and W_expansion is a matrix that 'expands' from the size of V up to the size of W_0. W_expansion can be randomly initialised, since all we care about is that V gets to have influence across a hyperplane in the original parameter space, and that W_0 lies within that plane.

We then optimise the new network, but only alter V. If we can get the network to train 'well', then we know that V is big enough, so we can try smaller sizes of V. At some point the network won't train well, and we know we've gone 'too far' in restricting the size of V. The smallest V that still trains well is what we call the intrinsic_dimension.

So: the IntrinsicDimensionWrapper takes a module (in the notebook I tested on a single Linear layer first, and then a whole MNIST CNN), goes through all its parameter blocks, and replaces each one with its frozen initial value plus a dependency on a single shared V. It then cleans out all the old parameters, so that when PyTorch thinks about optimisation, it only sees V.
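
Roughly, a sketch of the idea in code (this is from memory rather than the notebook's exact code, so take the class name, the _locate helper and the buffer names as illustrative only):

```python
import torch

class IntrinsicDimensionWrapperSketch(torch.nn.Module):
    """Reparameterise every parameter W of `module` as W = W_0 + P @ V,
    where V is a single shared low-dimensional trainable vector."""

    def __init__(self, module, intrinsic_dimension):
        super().__init__()
        self.module = module
        self.names = []
        # The only trainable parameter the optimiser will ever see.
        self.V = torch.nn.Parameter(torch.zeros(intrinsic_dimension))
        for name, param in list(module.named_parameters()):
            w0 = param.detach().clone()             # frozen random initial value W_0
            proj = torch.randn(w0.numel(), intrinsic_dimension)
            proj /= intrinsic_dimension ** 0.5      # the scale factor discussed below
            key = name.replace(".", "_")
            self.register_buffer("w0_" + key, w0)   # buffers are saved but not optimised
            self.register_buffer("proj_" + key, proj)
            self.names.append(name)
            # Delete the original nn.Parameter so it vanishes from named_parameters().
            parent, attr = self._locate(name)
            delattr(parent, attr)

    def _locate(self, name):
        # Walk a dotted parameter name like "conv1.weight" down to its owning submodule.
        parts = name.split(".")
        parent = self.module
        for p in parts[:-1]:
            parent = getattr(parent, p)
        return parent, parts[-1]

    def forward(self, x):
        # Rebuild each weight as W_0 + P @ V (reshaped), then run the wrapped module.
        for name in self.names:
            key = name.replace(".", "_")
            w0 = getattr(self, "w0_" + key)
            proj = getattr(self, "proj_" + key)
            parent, attr = self._locate(name)
            setattr(parent, attr, w0 + (proj @ self.V).view(w0.shape))
        return self.module(x)

wrapped = IntrinsicDimensionWrapperSketch(torch.nn.Linear(784, 10), intrinsic_dimension=100)
print([n for n, _ in wrapped.named_parameters()])   # ['V'] -- the only thing left to optimise
```

You'd then optimise wrapped.parameters() (i.e. just V) as usual, and repeat with smaller values of intrinsic_dimension until training stops working well.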

Does this make sense? I made the notebook for a presentation I gave in Singapore, a short while after the paper came out: https://blog.mdda.net/ai/2018/05/15/presentation-at-pytorch

Hope this helps
Martin

rahulvigneswaran commented May 15, 2020

@mdda Thank you so much for the explanation. This clears up most of my doubts. In the paper, they mention three ways of generating the random matrix (W_expansion):

  • Dense
  • Sparse
  • Fastfood

From your code, I can see that you went with the naive dense method for random matrix generation. You use torch.randn to generate the random matrix of size matrix_size, but why do you divide it elementwise by the square root of intrinsic_dimension?

Also, after the wrapper is applied, the model seems to have only one entry in named_parameters(), namely V; all the conv weight and bias parameters disappear from named_parameters(). I am confused about what you are doing there. Are you changing the architecture by any chance?

mdda commented May 15, 2020

I guess I should first point out that this was hacked together just a few hours before I gave the talk...

But my self-justification for this is that if I've got a vector and I multiply it by a matrix, there's a kind of 'impedance mismatch' in terms of scaling: each output element is a sum of size_of_V terms, each roughly O(V_i) * N(0,1), so its magnitude grows like the square root of size_of_V. So if the elements of V are 'about the right size', then I need to downscale the matrix by the square root of something relevant... (the same reasoning gives the per-head scaling factor in Transformer attention).

I'm not claiming this is exactly right, but the factor would be irrelevant after training anyway: I was just trying to slice off an approximate scale factor to enable easier optimisation.
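
As a quick numerical illustration of what I mean (this isn't from the notebook; it just assumes the elements of V are roughly unit scale):

```python
import torch

d = 1000                          # the intrinsic dimension, i.e. size_of_V
V = torch.randn(d)                # pretend V has roughly unit-scale elements
P = torch.randn(50_000, d)        # raw dense random projection with N(0,1) entries

print((P @ V).std())              # ~ sqrt(d) ≈ 31.6: each output sums d unit-scale terms
print(((P / d**0.5) @ V).std())   # ~ 1: dividing P by sqrt(d) restores a sensible scale
```

The point of the factor is just to keep the perturbation to W_0 at roughly the same scale whatever size V happens to be.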
