complex values in HGF output #248
Hi Vae - just very quickly, scanning your problem, I suspect that at some point you're taking the log (or square root) of a negative number, which gives you a complex result. This probably happens because you're using RTs instead of log-RTs. When you use a Gaussian model with RTs, negative values are possible according to that model, which doesn't make sense. The solution is to use either a log-Gaussian model with RTs or a Gaussian model with log-RTs.
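The effect described here can be reproduced in a few lines. This is a Python/NumPy sketch purely for illustration; TAPAS itself is MATLAB, where the log of a negative number silently returns a complex value:

```python
import numpy as np

# A Gaussian model of raw RTs puts probability mass on negative values, so
# model-derived quantities can go below zero. Taking the log of such a value
# is what yields complex numbers (MATLAB does this silently; NumPy needs a
# complex dtype to show the same behaviour).
rts = np.array([0.45, -0.12, 0.30])       # one "RT" has gone negative
log_rts = np.log(rts.astype(complex))     # log(-x) = log(x) + i*pi
print(log_rts[1].imag)                    # nonzero imaginary part: pi

# The fix: fit a Gaussian to log-RTs (or a log-Gaussian to raw RTs). The
# measured RTs are strictly positive, so their log is always real.
log_rt_data = np.log(np.array([450.0, 120.0, 300.0]))  # RTs in ms, all > 0
```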
Hi @chmathys Thanks for the reply! I have generated another set of parameters using log-RTs, and all the parameters look fine :) Thank you for your help. Another issue is that I would like to include the subjects' actual responses (categorical) alongside the RT and stimulus train when using the HGF. I came across a poster from your lab at CCN this year discussing this exact application. Is the script for this already available in tapas? Lastly, I still can't figure out how to get the equations for expected uncertainty and unexpected uncertainty in the logrt_linear_whatworld response model; could you give me some hints? Below are the equations that I am referring to:
Thank you so much!!! Regards,
Hi Vae - @alexjhess might be able to help you with multimodal responses. Regarding the uncertainty calculations, much of that is about finding the right values in multidimensional arrays. I suggest using the debugger to look at how this unfolds step by step. The only substantive thing that happens is that once we have the desired values (euos23 and exp(mu3)), we need to transform them down to the 1st level so they are on the same scale as the other predictors in the linear model. An explanation of this transformation is in the supplementary material to Iglesias et al. (2013) https://doi.org/10.1016/j.neuron.2013.09.009.
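For intuition, the transformation down to the 1st level is (in the binary-outcome case) a logistic-sigmoid mapping that puts a real-valued higher-level quantity on the same probability scale as the 1st-level predictors. A minimal Python sketch for illustration; in TAPAS the analogous helper is tapas_sgm (using it with mu2 here is an assumed example, not a quote from the toolbox):

```python
import numpy as np

def sgm(x, a=1.0):
    # Logistic sigmoid: maps a real-valued quantity onto (0, a).
    # With a = 1 this is the unit probability scale of the 1st level.
    return a / (1.0 + np.exp(-x))

# E.g. a 2nd-level posterior mean transformed to a 1st-level probability:
mu2 = 1.5
p = sgm(mu2)   # ~0.82, now on the same scale as the other 1st-level predictors
```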
Hi @Heechberri, All the functionality you need to fit your models to multiple response data modalities is already implemented in tapas. The most straightforward way to build a customized response model for your application is to create a new obs model where you essentially sum the log-likelihoods of the separate response data modalities. There is no concrete example model implemented in the toolbox yet; maybe we can include one in the next release (@chmathys). Anyway, we hope to be able to upload a preprint of the work you were referring to, including example code and models, by the end of this month. Feel free to contact me via hess@biomed.ee.ethz.ch in case of delays or if you're in desperate need of example code in the meantime. Hope this is helpful. All the best,
Thank you for referring me to the Iglesias et al. supplementary material, it really helped me understand more of the transforms. How about those variables that are calculated from two different levels? For example, if I wanted to transform the precision weight at level 3 (which is ...)? @alexjhess Here are the code snippets from fitModel.m that I am talking about.
Hahaha... as I am a novice coder, I am not confident coding these by myself, so I think I will still email you for the sample code soon! Thank you sooo much for your help! Looking forward to the publication!! Regards,
No, I was thinking of something even simpler than modifying the fitModel.m function, namely creating a new response model that consists of a combination of several response models for different data streams. Your function could look something like this:
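A conceptual sketch of that idea, combining the log-likelihoods of a binary-choice stream and a continuous log-RT stream. This is Python for illustration only; the function and variable names are placeholders, not TAPAS identifiers:

```python
import numpy as np

def combined_logll(choices, log_rts, p_choice, mu_rt, sigma_rt):
    # Bernoulli log-likelihood for the binary choices, given the model's
    # trial-wise choice probabilities p_choice.
    ll_choice = np.sum(choices * np.log(p_choice)
                       + (1 - choices) * np.log(1 - p_choice))
    # Gaussian log-likelihood for the log-RTs around the model's trial-wise
    # predictions mu_rt, with noise standard deviation sigma_rt.
    ll_rt = np.sum(-0.5 * np.log(2 * np.pi * sigma_rt**2)
                   - (log_rts - mu_rt)**2 / (2 * sigma_rt**2))
    # The combined observation model is simply the sum: the two data streams
    # are treated as conditionally independent given the perceptual states.
    return ll_choice + ll_rt
```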
Of course you would need to add the corresponding '_config.m', '_transp.m', and '_namep.m' files. Hope this helps, otherwise shoot me an e-mail. :)
Hi @Heechberri, I just wanted to let you know that we have uploaded a preprint introducing some example HGF response models that simultaneously model binary choices and continuous RTs. You can check it out at https://www.biorxiv.org/content/10.1101/2024.02.19.581001v1 and there are also links to code and data included. Cheers!
Hi!
Thanks!
This is awesome!
Regards,
Vae
Hi HGF experts,
I am a newbie in coding and computational neuroscience, so please be patient with me ;D
I am using tapas_hgf_whatworld, tapas_logrt_linear_whatworld (edited; edits explained below), and tapas_quasinewton_optim to model Bayesian inference parameters from the reaction times of a pSRTT task (for experiment details see below; adapted from Marshall et al., 2016).
About half of the model outputs contain complex numbers. Here is a sample of the output:
Changes made to the response model logrt_linear_whatworld (attached).
1. I am using original RT values (milliseconds/100) instead of log-RT. RT in milliseconds is divided by 100 to bring the reaction times closer to the scale of the Bayesian parameters, which are roughly on the order of 10^1 to 10^-2.
2. I changed the response model to RT ~ be0 + be1*(prediction error at level 1) + be2*(precision weight at level 2) + be3*(precision-weighted prediction error at level 3) + be4*(mean of level 3) + be5*(post-error slowing) + ze.
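Written out, the modified response model in item 2 amounts to a linear predictor with Gaussian noise of variance ze. A Python sketch for illustration only; the arguments stand in for the corresponding HGF trajectory values on a given trial, and the names are placeholders:

```python
def predicted_rt(be, pe1, pw2, pwpe3, mu3, pes):
    # Linear predictor for the (scaled) RT on one trial. be holds the
    # regression weights (be0, ..., be5); trial-wise observed RTs are then
    # modelled as Gaussian around this prediction with noise variance ze.
    be0, be1, be2, be3, be4, be5 = be
    return be0 + be1 * pe1 + be2 * pw2 + be3 * pwpe3 + be4 * mu3 + be5 * pes
```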
There are two things I couldn't understand about the original script:
1. Why were the trajectories not used directly in the response model instead of the infStates?
I assumed that Marshall et al., 2016 used the logrt_linear_whatworld scripts; however, the variables are different from what is reported in Figure 3 of the paper. From the figure, I assumed that the variables of interest are the first-level PE, the third-level precision-weighted prediction error, the third-level mean, and post-error slowing. These perceptual parameters can already be found in the traj variable. I tried both using the traj variables directly (attached as the whatworld3 family of scripts) and using the equations from Mathys et al., 2011 and 2014 to derive the variables I needed (attached as the whatworld2 family of scripts), and both produced complex numbers.
From my understanding, the first equation in logrt_linear_whatworld calculates Shannon surprise, i.e. -log(probability of seeing that stimulus), but I don't quite understand the other two equations, which brings me to my next question.
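For reference, Shannon surprise is just the negative log of the probability the model assigned to the observed stimulus (a Python sketch for illustration):

```python
import numpy as np

def shannon_surprise(p):
    # Information content of an outcome that had probability p under the model:
    # rare stimuli (small p) are highly surprising; a certain one (p = 1) not at all.
    return -np.log(p)

shannon_surprise(1.0)   # 0.0: fully expected, no surprise
shannon_surprise(0.25)  # ~1.386 nats
```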
2. Why were the calculated response variables "transformed to 1st level" and how are they transformed to the first level?
I am well aware that my edits could have caused these weird problems, and I have spent quite some time trying to troubleshoot on my own to no avail... Thank you for all the patience, understanding, and help!
Vae
tapas_rt_linear_whatworld2.txt
tapas_rt_linear_whatworld3.txt
tapas_hgf_whatworld2.txt
tapas_hgf_whatworld3.txt
tapas_hgf_whatworld3_config.txt
tapas_rt_linear_whatworld3_config.txt
tapas_rt_linear_whatworld2_config.txt
tapas_hgf_whatworld2_config.txt
I also turned on verbose mode (in tapas_quasinewton_optim_config) for one of the subjects just to see what is going on; here is the output, if it helps. After about 9 iterations the improvements were very small (less than 1), and I am not sure why the program is still optimizing.