Automatic feature extraction #527
Comments
Hi there, thanks for reaching out! I had a brief look at your notebook; I do not think you are doing anything wrong. A few thoughts:
I hope that helps!
Yes, it helps, thank you very much.
Thank you again. Best,
Best
It is so good to see a paper with <15 pages for a change. Thanks for all the information, really helpful! Best,
Hi,
In [Goncalves et al., 2020] it is stated that: "SNPE can be applied to, and might benefit from the use of summary features, but it also makes use of the ability of neural networks to automatically learn informative features in high-dimensional data. Thus, SNPE can also be applied directly to raw data (e.g. using recurrent neural networks [Lueckmann et al., 2017]), ...".
The work by [Lueckmann et al., 2017] is related to the method SNPE_B, at least according to the `sbi` documentation; however, in the version of `sbi` I am currently using (0.16.0), it is stated that the mentioned inference algorithm is currently not implemented. Nevertheless, I have been playing around with SNPE (or, more precisely, SNPE_C) and raw data, and it seems to work quite well for a very simple example similar to the one in the official `brian2` example directory, available here. In this example, the Hodgkin-Huxley neuron model is used to test the ability of simulation-based inference and the possibility of integration with `brian2`. It is based on a fake current-clamp recording generated from the same model that is used in the inference process. Two of the parameters (the maximum sodium and potassium conductances) are considered unknown and are inferred from the data.
The first thing I tried was using an embedding network as a way to semi-automatically extract relevant features. This embedding network is based on Time2Vec, which is, in a nutshell, a very simple sinusoidal layer.
According to r/MachineLearning commentators, it is nothing but a "quality case of 'just throw neural networks at it'" and "overall just a shitty rehashing of discrete Fourier transforms".
In the original Time2Vec paper, the authors use this sine representation of the input just as an additional layer before an LSTM or GRU, and it seems to produce better results than vanilla recurrent networks; in the case I have been working on, however, it does not seem to work well.
The next approach was feeding the raw data output (generated voltage traces), x, of size (10000, 7000), directly to SNPE. It works extremely well and is comparable to (if not better than) the case where I used summary statistics consisting of the mean and std of the action potential, the number of spikes, and the maximum value of the membrane potential from the generated traces.
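For concreteness, a sketch of how such hand-crafted summary statistics might be computed from a single trace; the threshold, function name, and the toy trace are assumptions, not the code from my notebook.

```python
import numpy as np

def summary_stats(v, threshold=0.0):
    """Hand-crafted summary features of one voltage trace:
    spike count, mean and std of the suprathreshold (spiking) samples,
    and the overall membrane-potential maximum."""
    v = np.asarray(v, dtype=float)
    above = v > threshold
    # a spike is counted as one upward threshold crossing
    n_spikes = int(np.count_nonzero(above[1:] & ~above[:-1]))
    active = v[above] if above.any() else v
    return np.array([n_spikes, active.mean(), active.std(), v.max()])

# toy trace: baseline at -70 "mV" with three one-sample spikes
v = np.full(1000, -70.0)
v[[100, 400, 800]] = 30.0
stats = summary_stats(v)
print(stats)  # 3 spikes, mean 30, std 0, max 30
```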
The thing that I do not understand is: how is this possible? Am I doing something wrong, or is this SNPE_C approach able to automatically extract features from the data even though `embedding_net` is still set to `None`? In [Lueckmann et al., 2017], subsection 2.3, under Learning Features, it is stated that when time-series recordings are fed directly into the network, the first layer of the MDN becomes a recurrent layer instead of a fully connected one. But even with different methods, such as NSF for example, I have been able to obtain good results, although much more slowly.
The notebook is available here.
Sorry for this long text 😬
refs.
Goncalves et al., eLife 2020;9:e56261, available online
Lueckmann et al., in Proceedings of NIPS 2017, available online