
Data Simulation Functionality #647

Closed
Brendan-Schuetze opened this issue Jul 30, 2019 · 9 comments

Comments

@Brendan-Schuetze

Brendan-Schuetze commented Jul 30, 2019

In my experience, subtle errors in my experiments are often best found through a careful examination of participant datasets. However, in order to generate artificial test data from jsPsych experiments, one currently has to run through the experiment manually.

Such a process often misses errors in the counterbalancing and trial ordering of experiments, because these properties can only be assessed by evaluating many participants' datasets. A single dataset will not reveal to an experimenter whether the trials are being properly counterbalanced, etc.

I envision a system wherein each plugin has an associated manifest documenting the expected ranges of values (and perhaps even distributions) of each variable being saved per trial (e.g., RT is a number between 0 and 5000 ms; Likert scales always range between 1 and 7). Then there would be a simulate function that would progress through the experiment rapidly and automatically, saving a dataset in the same format as real participant data.
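
Purely as an illustration of the kind of manifest I have in mind (none of these field names exist in jsPsych), it might look something like this for a Likert plugin:

var simulation_manifest = {
  plugin: 'survey-likert',
  data: {
    rt:       { type: 'numeric', min: 0, max: 5000, distribution: 'lognormal' }, // milliseconds
    response: { type: 'integer', min: 1, max: 7 }                                // 7-point Likert scale
  }
};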

Such a feature would also allow for the rapid prototyping and pre-registration of analysis plans.

Is this already possible in jsPsych? If not, would there be interest in developing such a system? I may be willing and able to contribute to such an effort.

@jodeleeuw
Member

I think this would be great. There's been some uncoordinated discussion of a "debug mode" before (#358) and this would be a logical option within that mode. I'm very open to any contributions along this line.

@CrumpLab

Just came here to say something along these lines would be fantastic. Hoping to find some time to contribute if I can.

@jodeleeuw
Member

I can think of at least two ways to implement this. I'm sure there are others, but I thought I'd quickly describe these two approaches in the interest of jump starting work. Both approaches are similar to what @Brendan-Schuetze described.

Option 1: Expand plugin.info data

Currently plugin.info contains metadata about the parameters, but no information about the expected data. We could expand this for each plugin. For example, the html-keyboard-response plugin might look like this:

plugin.info = {
    name: 'html-keyboard-response',
    description: '',
    parameters: { ... },
    data: {
      rt: {
        type: jsPsych.plugins.dataType.FLOAT,
        // simulated response time: uniform between 200 and 1000 ms
        default: function() { return 200 + Math.random()*800 }
      },
      key_press: {
        type: jsPsych.plugins.dataType.KEY,
        // simulated response: one of the key codes 80-82 (P, Q, R)
        default: function() { return jsPsych.randomization.sampleWithoutReplacement([80, 81, 82], 1)[0]; }
      }
    }
}

A couple issues come to mind with this approach:

  1. It's hard to specify a default value that makes sense for every experiment. So we would probably need a way to make the defaults sensitive to the trial parameters (e.g., restrict the default for key_press to valid choices); see the sketch after this list.
  2. It's not clear to me exactly what the right way to use this information is. We still would need some kind of simulation method that runs through the experiment.
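
A minimal sketch of the first point, assuming (hypothetically) that the default generator received the resolved trial object, which is not how plugin.info works today:

plugin.info.data = {
  key_press: {
    type: jsPsych.plugins.dataType.KEY,
    // hypothetical: with access to the trial object, the simulated key press
    // can be restricted to the trial's valid choices
    default: function(trial) {
      var choices = (trial.choices !== jsPsych.NO_KEYS) ? trial.choices : [32]; // fall back to spacebar
      return jsPsych.randomization.sampleWithoutReplacement(choices, 1)[0];
    }
  }
};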

Option 2: plugin.simulate method

We could add a plugin.simulate method to every plugin. The method would specify the interactions that the user would normally do for the trial. For html-keyboard-response it might look like this:

plugin.simulate = function(trial) {
  if (trial.choices !== jsPsych.NO_KEYS) {
    // pick a random valid key and dispatch keydown/keyup events after 250 ms
    var key_response = jsPsych.randomization.sampleWithoutReplacement(trial.choices, 1)[0];
    setTimeout(function() {
      document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {keyCode: key_response}));
      document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {keyCode: key_response}));
    }, 250);
  }
}

This approach is pretty flexible. It could even be parameterized by the experimenter, e.g., replacing the 250 with a parameter like trial.simulate.response_time.
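
A rough sketch of that parameterization (trial.simulate is a hypothetical field, not existing jsPsych API):

plugin.simulate = function(trial) {
  // hypothetical per-trial simulation settings supplied by the experimenter
  var sim = trial.simulate || {};
  var response_time = (sim.response_time !== undefined) ? sim.response_time : 250;
  if (trial.choices !== jsPsych.NO_KEYS) {
    var key_response = jsPsych.randomization.sampleWithoutReplacement(trial.choices, 1)[0];
    setTimeout(function() {
      var display = document.querySelector('.jspsych-display-element');
      display.dispatchEvent(new KeyboardEvent('keydown', {keyCode: key_response}));
      display.dispatchEvent(new KeyboardEvent('keyup', {keyCode: key_response}));
    }, response_time);
  }
};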

A downside is that it's a real-time simulation. It would be automated, but slow. There might be a way to implement fake timers so that any timed events are executed immediately instead of in real time, but that would complicate implementation.

@jodeleeuw jodeleeuw added this to the 7.0 milestone Aug 15, 2019
@CrumpLab

As an option to consider before adding full simulation of possible responses, it would be worth having null values inserted into the data object as placeholders. For example, a use case for me would be to pull the data into R, then use the factor/trial structure represented in the data to guide simulation of results done in R (or language of choice).
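
For illustration, a placeholder trial record might look like this (condition is just an illustrative design variable, not a built-in column):

var placeholder_trial = {
  trial_type: 'html-keyboard-response',
  trial_index: 12,
  condition: 'incongruent', // design/factor variable preserved for simulation in R
  rt: null,                 // response fields left as null placeholders
  key_press: null
};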

@CrumpLab

Option 1 and 2 look like really good starting points to me.

Also, I realized I could just set the trial duration to 0 globally in jsPsych.init:

on_trial_start: function(trial) {
    trial.trial_duration = 0;
  }

and that rapidly runs the experiment and produces a data object (without simulated data, but with placeholders).

This could work with option 1: the default values produce simulated results, and the simulation runs quickly because trial_duration is 0 for all trials.
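
A fuller version of that trick might look like this (a sketch assuming a jsPsych 6.x experiment whose timeline is already defined in a timeline array):

jsPsych.init({
  timeline: timeline,
  on_trial_start: function(trial) {
    // force every trial to end immediately
    trial.trial_duration = 0;
  },
  on_finish: function() {
    // inspect the resulting data object (placeholders only, no simulated responses)
    jsPsych.data.displayData('csv');
  }
});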

@jodeleeuw
Member

Great point @CrumpLab. I think most, but not all plugins currently support trial_duration. It might be worth making that parameter universal to make a debug mode easier.

@fohria

fohria commented Feb 10, 2020

I'm very interested in something like this both for debugging and for enabling simulated/artificial participants. My research involves computational modelling of human behaviour, so I've used jsPsych for the experiments with human participants and then coded the same experiment in Python to use for my artificial participants.

So something like option 2, combined with the possibility of exposing the stimuli and response options, would be awesome, as I could then use the same experiment code for humans and artificial participants. I've thought of creating a wrapper of jsPsych for OpenAI Gym but haven't had the time to look into how that would work.

Then again, what I'd like might be too complicated to implement and should perhaps be built separately from a debug/simulation mode?

@nikbpetrov
Contributor

@jodeleeuw's Option 2, a plugin.simulate method, has been implemented in #1886.

@jodeleeuw jodeleeuw added this to To do in MOSS milestone 4 Oct 4, 2021
@jodeleeuw
Member

See #2287

@jodeleeuw jodeleeuw moved this from To do to Done in MOSS milestone 4 Nov 27, 2021
@jodeleeuw jodeleeuw removed this from the 7.2 milestone Dec 7, 2021