Data Simulation Functionality #647
Comments
I think this would be great. There's been some uncoordinated discussion of a "debug mode" before (#358) and this would be a logical option within that mode. I'm very open to any contributions along this line.
Just came here to say something along these lines would be fantastic. Hoping to find some time to contribute if I can.
I can think of at least two ways to implement this. I'm sure there are others, but I thought I'd quickly describe these two approaches in the interest of jump starting work. Both approaches are similar to what @Brendan-Schuetze described.

**Option 1: Expand `plugin.info` data**

Currently:

```js
plugin.info = {
  name: 'html-keyboard-response',
  description: '',
  parameters: { ... },
  data: {
    rt: {
      type: jsPsych.plugins.dataType.FLOAT,
      default: function() { return 200 + Math.random() * 800; }
    },
    key_press: {
      type: jsPsych.plugins.dataType.KEY,
      default: function() { return jsPsych.randomization.sampleWithoutReplacement([80, 81, 82], 1)[0]; }
    }
  }
}
```

A couple issues come to mind with this approach:
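To make the idea concrete, a runner for this option could synthesize a fake data record by calling each field's `default` function. A minimal sketch, assuming a hypothetical `simulateTrialData` helper and a plugin spec stripped of jsPsych dependencies (neither is part of the jsPsych API):

```js
// Hypothetical sketch: build one fake data record from a plugin.info-style
// `data` spec by invoking each field's `default` function.
function simulateTrialData(info) {
  const record = {};
  for (const [field, spec] of Object.entries(info.data)) {
    record[field] = typeof spec.default === 'function' ? spec.default() : spec.default;
  }
  return record;
}

// Example spec mirroring the one above, without jsPsych globals.
const info = {
  name: 'html-keyboard-response',
  data: {
    rt: { default: () => 200 + Math.random() * 800 },
    key_press: { default: () => [80, 81, 82][Math.floor(Math.random() * 3)] }
  }
};

const fake = simulateTrialData(info);
// fake.rt falls in [200, 1000); fake.key_press is one of 80, 81, 82.
```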
**Option 2: `plugin.simulate` method**

We could add a `plugin.simulate` method to each plugin:

```js
plugin.simulate = function(trial) {
  if (trial.choices !== jsPsych.NO_KEYS) {
    var key_response = jsPsych.randomization.sampleWithoutReplacement(trial.choices, 1)[0];
    setTimeout(function() {
      document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', { keyCode: key_response }));
      document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', { keyCode: key_response }));
    }, 250);
  }
}
```

This approach is pretty flexible. It could even be parameterized by the experimenter, e.g., replacing the 250 with a configurable parameter.

A downside is that it's a real-time simulation. It would be automated, but slow. There might be a way to implement fake timers so that any timed events are executed immediately instead of in real time, but that would complicate implementation.
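The fake-timer idea could be prototyped by swapping `setTimeout` for a version that runs callbacks synchronously, so simulated responses fire without the real-time wait. A rough sketch under stated assumptions (helper names are illustrative; a real implementation would also need `clearTimeout`, nested timers, and jsPsych's own timing calls):

```js
// Illustrative sketch: replace the global setTimeout with an immediate,
// synchronous version so real-time simulations run instantly.
const realSetTimeout = setTimeout;

function installFakeTimers() {
  globalThis.setTimeout = (fn, _delay, ...args) => {
    fn(...args); // run the callback immediately instead of waiting
    return 0;    // dummy timer id
  };
}

function restoreTimers() {
  globalThis.setTimeout = realSetTimeout;
}

// With fake timers installed, a 250 ms "simulated response" fires at once.
installFakeTimers();
let responded = false;
setTimeout(() => { responded = true; }, 250);
restoreTimers();
// responded is already true here, with no 250 ms wait.
```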
As an option to consider before adding full simulation of possible responses, it would be worth having null values inserted as placeholders into the data object. For example, a use case for me would be to pull the data into R, then use the factor/trial structure represented in the data to guide simulation of results done in R (or language of choice).
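A sketch of that idea (the helper name and timeline shape here are made up for illustration): walk the trial structure and emit rows whose design variables are filled in but whose response variables are null, ready to be simulated downstream.

```js
// Illustrative sketch: emit one placeholder row per trial, copying the
// design variables and leaving the responses null, so the factor/trial
// structure can drive simulation in R or another language.
function placeholderRows(trials) {
  return trials.map((t, i) => ({
    trial_index: i,
    condition: t.condition,
    stimulus: t.stimulus,
    rt: null,       // to be simulated externally
    response: null  // to be simulated externally
  }));
}

const trials = [
  { condition: 'congruent', stimulus: 'RED' },
  { condition: 'incongruent', stimulus: 'BLUE' }
];

const rows = placeholderRows(trials);
// Each row keeps the design structure; rt and response stay null.
```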
Options 1 and 2 look like really good starting points to me. Also, I realized I could just set the trial duration to 0 globally in jsPsych.init, and that rapidly runs the experiment and produces a data object (without simulated data, but with placeholders). This could work with option 1: the default values produce simulated results, and the simulation runs quickly because trial_duration is 0 for all trials.
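The zero-duration trick could also be applied without touching jsPsych internals by rewriting the timeline before a dry run. A hypothetical helper, not a jsPsych feature, operating on plain timeline objects:

```js
// Hypothetical helper: return a copy of a (possibly nested) jsPsych-style
// timeline with trial_duration forced to 0 on every trial, for fast dry runs.
function zeroDurations(timeline) {
  return timeline.map(node =>
    node.timeline
      ? { ...node, timeline: zeroDurations(node.timeline) } // recurse into nested timelines
      : { ...node, trial_duration: 0 }
  );
}

const timeline = [
  { type: 'html-keyboard-response', stimulus: 'A', trial_duration: 1000 },
  { timeline: [{ type: 'html-keyboard-response', stimulus: 'B', trial_duration: 1500 }] }
];

const fast = zeroDurations(timeline);
// Every trial in `fast` has trial_duration 0; the original timeline is untouched.
```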
Great point @CrumpLab. I think most, but not all, plugins currently support `trial_duration`.
I'm very interested in something like this, both for debugging and for enabling simulated/artificial participants. My research involves computational modelling of human behaviour, so I've used jsPsych for the experiments with human participants and then coded the same experiment in Python to use for my artificial participants. Something like option 2, combined with the possibility of exposing the stimuli and response options, would be awesome, as I could then use the same experiment code for humans and artificial participants. I've thought of creating a wrapper of jsPsych for OpenAI Gym but haven't had the time to look into how that would work. Then again, what I'd like might be too complicated to implement and should perhaps be made separately from a debug/simulation mode?
@jodeleeuw's Option 2, a simulate method, is implemented in #1886.
See #2287 |
In my experience, subtle errors in my experiments are often best found through a careful examination of participant datasets. However, to generate artificial test data from jsPsych experiments, one has to run through the experiment manually.
Such a process often misses errors in the counterbalancing and trial ordering of experiments, as these variables can only be assessed by evaluating many participants' datasets. One dataset will not reveal to an experimenter whether the trials are being properly counterbalanced.
I envision a system wherein each plugin has an associated manifest documenting the expected ranges of values (and perhaps even distributions) of each variable being saved per trial (e.g., RT is a number between 0 and 5000 ms; Likert scales always range between 1 and 7). There would then be a simulate function that progresses through the experiment rapidly and automatically, saving datasets in the same format as real participant data.
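As a sketch of what such a manifest and checker could look like (the field names, ranges, and function names here are made up for illustration, not part of jsPsych):

```js
// Illustrative manifest: expected range per saved variable for one plugin.
const manifest = {
  rt:     { type: 'numeric', min: 0, max: 5000 },
  likert: { type: 'numeric', min: 1, max: 7 }
};

// Check a simulated (or real) dataset row against the manifest,
// returning the names of any missing or out-of-range fields.
function validateRow(row, manifest) {
  const problems = [];
  for (const [field, spec] of Object.entries(manifest)) {
    const v = row[field];
    if (typeof v !== 'number' || v < spec.min || v > spec.max) {
      problems.push(field);
    }
  }
  return problems;
}

// validateRow({ rt: 431, likert: 9 }, manifest) flags 'likert' only.
```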
Such a feature would also allow for the rapid prototyping and pre-registration of analysis plans.
Is this already possible in jsPsych? If not, would there be interest in developing such a system? I may be willing and able to contribute to such an effort.