Investigate using dimensionality-reduction methods on outputs #45
Comments
It is possible, check out the figure below (Fig. 1.B in [1]). I would say that it is a good idea to stick with an expert-defined set of summary features for now; it would be straightforward to implement this automatic approach later if necessary. [1] Cranmer, Brehmer and Louppe. PNAS (2020) 117:30055-30062
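For context, an expert-defined summary-feature set of the kind discussed above could look like the following minimal sketch. The particular features (upward threshold crossings as a spike proxy, mean, standard deviation) and the threshold value are illustrative assumptions, not the toolbox's actual feature set:

```python
import numpy as np

def summary_features(trace, threshold=0.0):
    """Illustrative hand-picked features for a 1-D voltage trace.
    The feature choices here are examples only."""
    # Count upward crossings of the threshold as a crude spike count
    upward = (trace[:-1] < threshold) & (trace[1:] >= threshold)
    return {
        "n_spikes": int(np.count_nonzero(upward)),
        "mean_v": float(trace.mean()),
        "std_v": float(trace.std()),
    }

# Example: a two-period sine crosses 0.5 upward twice
features = summary_features(np.sin(np.linspace(0, 4 * np.pi, 1000)),
                            threshold=0.5)
```

The point of the automatic approach is precisely to avoid having to hand-design such a dictionary for each model.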
Thanks for looking into this and for the references, I'll try to have a closer look soon. Since this is non-trivial (but very interesting!), I agree that we should focus on user-provided summary features first. Interesting note about applying the network directly to the high-dimensional data; this might be something that we can try out easily rather soon.
I am trying to make a comparison of the incremental versions of dimensionality-reduction methods. Could someone tell me where I can find the code for Incremental Locally Linear Embedding, Incremental Multidimensional Scaling, or Incremental Laplacian Eigenmaps? Thank you.
Hi @endimeon777, regarding the code for the techniques you mentioned, I really have no idea. However, I am not sure those methods are even applicable to the data we are handling here. Best,
Hi @mstimberg . I've decided to play around with the issue of using raw data directly without feature extraction (see my first comment):
The authors of the mentioned study [Gonçalves et al., 2020] state the following:
The procedure is as follows: when SNPE is used in combination with an MDN, the MDN is augmented with an RNN that runs along the recorded voltage trace to learn appropriate features and thus constrain the model parameters. I am not sure if this happens automatically when the output dimension is large, but it works. In the model-fitting toolbox, we could probably check whether the neural posterior class belongs to SNPE and whether the density estimator is an MDN. If so, users would not have to provide a list (or dictionary) of features w.r.t. which the inference is performed, if they are only interested in "fitting" the parameters and no special features are of interest to them.
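The idea of running an RNN along a trace to produce a fixed-size feature vector can be sketched as below, using a randomly initialised (untrained) Elman RNN in plain NumPy. In practice the network weights would be trained jointly with the density estimator (e.g. as an embedding net in sbi); the layer sizes here are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_features(trace, n_hidden=16, n_features=8):
    """Run a (randomly initialised, untrained) Elman RNN along a 1-D
    trace and read out a fixed-size feature vector from the final
    hidden state. Sizes and weight scales are illustrative only."""
    W_in = rng.normal(scale=0.1, size=(n_hidden,))
    W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
    W_out = rng.normal(scale=0.1, size=(n_features, n_hidden))
    h = np.zeros(n_hidden)
    for v in trace:
        # Standard Elman update: new hidden state from input and old state
        h = np.tanh(W_in * v + W_rec @ h)
    return W_out @ h

features = rnn_features(np.sin(np.linspace(0, 10, 1000)))
```

However long the input trace, the output always has `n_features` entries, which is what lets the density estimator work with a fixed-dimensional conditioning vector.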
Continuation of the discussion started in the previous comment: sbi-dev/sbi#527 + some additional info. Since I will deal with #53, I can also enable empty list/dict for |
With this last PR merged in |
Instead of asking the user to provide metrics to extract features, might it be possible to automatically reduce the dimensionality of the output (e.g. a voltage trace)?