
Obtaining assessment data (pass/fail/trials) #14

Closed
coatless opened this issue Feb 1, 2017 · 4 comments · Fixed by #562

Comments

@coatless
Contributor

coatless commented Feb 1, 2017

Glancing over the documentation, the event recorders presently only supply data for submission, error, or hint events. Is there any way to obtain more granularity here, such that the outcome of a submission could be discriminated as either correct or incorrect?

If so, then the responses could potentially be aggregated in a database and used to assess students under a Shiny Server Pro setup, assuming login was required. In essence, this has the potential to be the nbgrader feature of Jupyter notebooks, but for R.

@jjallaire
Member

Yes, what you are describing is definitely a scenario we want to support well. If you make use of an Exercise Checker (https://rstudio.github.io/tutor/exercises.html#exercise_checking), then the results of the check are included in the call to the event recorder.
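
As a rough sketch (assuming the learnr/tutor event-recorder hook; the event and field names, e.g. exercise_submission and data$feedback$correct, may differ across versions, so check the installed package's documentation):

    options(tutorial.event_recorder = function(tutorial_id, tutorial_version,
                                               user_id, event, data) {
      # When an exercise checker is configured, submission events carry the
      # checker's feedback alongside the submitted code.
      if (event == "exercise_submission") {
        cat("user:", user_id,
            "exercise:", data$label,
            "correct:", isTRUE(data$feedback$correct), "\n")
      }
    })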

@dtkaplan

dtkaplan commented Feb 1, 2017

@coatless One prototype exercise checker is the checkr package, available via

devtools::install_github("dtkaplan/checkr")

There's a vignette with the package that gives a quick overview and examples.

The newest version of checkr contains an (optional) login/verification component, so that you can have a login even if you are running the tutorial on a generic Shiny server.

Shinyapps.io is great, but you need the "standard" account level, which at $99/mo would be painful for many instructors. The login component I implemented uses Google spreadsheets, so just about any instructor can set up and maintain the accounts and access the submissions.

I wrote checkr with a focus on checking R code, but this can be expanded to cover other modes of submission, for example, entering equations or fill-in-the-blanks. I'd be interested to hear what modes you would be interested in having checked.

@coatless
Contributor Author

@jjallaire: Excellent! I'll try to throw something together.

@dtkaplan: Thanks for the information, I'll definitely look into checkr. From the exercise-template vignette, it seems like each problem could be isolated into its own .Rmd file and then combined so that each student could have a "random" ordering of questions?

Presently, I'm able to use an education-licensed Shiny Server Pro. So, I'll likely study how results are sent to Google spreadsheets and perhaps make a modification to stream the results into a database.
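
As a rough sketch of that modification (assuming a DBI-compatible connection and the learnr/tutor event-recorder hook; the table and field names here are illustrative):

    library(DBI)

    con <- DBI::dbConnect(RSQLite::SQLite(), "submissions.sqlite")

    record_submission <- function(tutorial_id, tutorial_version,
                                  user_id, event, data) {
      # Append each exercise submission, plus its checked outcome if a
      # checker supplied one, as a row in the database.
      if (event == "exercise_submission") {
        DBI::dbWriteTable(con, "submissions",
          data.frame(tutorial = tutorial_id,
                     user     = user_id,
                     label    = data$label,
                     code     = data$code,
                     correct  = isTRUE(data$feedback$correct),
                     time     = as.character(Sys.time()),
                     stringsAsFactors = FALSE),
          append = TRUE)
      }
    }

    options(tutorial.event_recorder = record_submission)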

Regarding your assessment-types question, I'm looking at supporting a wide range of assessments, including:

  • symbolic equations (input fields transformed with MathQuill)
  • matching
  • explicitly marking points on a graph (think identify(x, y))
  • capturing different student models

I've worked fairly extensively on the last of these over the past year or so (https://github.com/coatless/stat429-fa15-autograder).

@dtkaplan

@coatless Putting each exercise or question into its own .Rmd is working very well for me. I do this using the knitr child system. Among other things, it lets me debug individual problems (which relates mainly to getting checkr to give a useful response to different kinds of errors).
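
For reference, a minimal parent-document sketch (file names are illustrative): one chunk in the parent .Rmd knits each exercise file as a child, and wrapping the vector in sample() shuffles their order each time the document is knit. Note that a pre-rendered tutorial fixes the order at render time, so truly per-student ordering would need something more dynamic.

    ```{r, child = sample(c("exercise-01.Rmd", "exercise-02.Rmd", "exercise-03.Rmd"))}
    ```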
