
writeAlizer: An R Package to Generate Automated Writing Quality and Curriculum-Based Measurement (CBM) Scores

This repository hosts the code for an R package that applies research-based writing scoring models (see references below). The repository wiki also provides documentation that serves as an electronic supplement to the published research articles.

The writeAlizer R package (a) imports ReaderBench, Coh-Metrix, and GAMET output files into R, (b) downloads existing predictive scoring models to the local machine, and (c) uses those models to generate predicted writing quality scores, or predicted Correct Word Sequences (CWS) and Correct Minus Incorrect Word Sequences (CIWS) scores, from the ReaderBench, Coh-Metrix, and/or GAMET files.

Versions

The version history of writeAlizer is available in the package NEWS.md file.

Getting Started

Prerequisites

writeAlizer accepts the following output files as inputs:

  1. ReaderBench: writeAlizer supports output files (.csv format) generated by the standalone version of ReaderBench, available from the ReaderBench website.
  2. Coh-Metrix: writeAlizer supports output files from Coh-Metrix version 3.0 (.csv format).
  3. GAMET: writeAlizer supports output files from GAMET version 1.0 (.csv format).

The writeAlizer scoring models assume that the column names in the output files are unchanged (exactly as generated by each program). For programs that list file paths in the first column, the writeAlizer file import functions parse the file names from the file paths and store them as an identification (ID) variable. File names/ID values must be numeric.
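
For illustration only (this is not writeAlizer's internal code), the sketch below shows the idea: a file path such as C:/essays/1001.txt reduces to the numeric ID 1001, so input files should be named with numbers before scoring.

# Illustration only: file names are parsed from the paths listed in the
# first column of the output file and must reduce to numeric IDs.
paths <- c("C:/essays/1001.txt", "C:/essays/1002.txt")
ids <- as.numeric(tools::file_path_sans_ext(basename(paths)))
ids
#> [1] 1001 1002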

Installing

writeAlizer is not available on CRAN. To install writeAlizer in R, first make sure that the devtools package is installed:

install.packages("devtools")

With devtools installed, you can install writeAlizer in R directly from this GitHub repository:

devtools::install_github("shmercer/writeAlizer")

After installation, documentation of the file import and predict_quality() functions, and examples of their use, can be found in the R package help file.

help("writeAlizer")

Documentation

Information on the various scoring models available and how they were developed is in this repository's wiki:

  1. Description of the general process used to develop scoring algorithms.
  2. Descriptions of the specific scoring models available (models recommended for use in research are indicated by *), including information on the relative importance of metrics and the weighting of algorithms.

Package Author and Maintainer

Sterett H. Mercer (University of British Columbia)

Also see the list of code contributors for this package.

writeAlizer Logo Wear

For writeAlizer t-shirts, hats, coffee mugs, etc., visit https://www.zazzle.ca/store/writealizer (Canada) or https://www.zazzle.com/store/writealizer (USA). Additional countries can be selected by following either link.

References

Journal Articles

Matta, M., Mercer, S. H., & Keller-Margulis, M. A. (2022). Evaluating validity and bias for hand-calculated and automated written expression curriculum-based measurement scores. Assessment in Education: Principles, Policy & Practice, 29, 200-218. https://doi.org/10.1080/0969594X.2022.2043240

Mercer, S. H., & Cannon, J. E. (2022). Validity of automated learning progress assessment in English written expression for students with learning difficulties. Journal for Educational Research Online, 14, 39-60. https://doi.org/10.31244/jero.2022.01.03 link to pre-print of accepted article

Matta, M., Keller-Margulis, M. A., & Mercer, S. H. (2022). Cost analysis and cost effectiveness of hand-scored and automated approaches to writing screening. Journal of School Psychology, 92, 80-95. https://doi.org/10.1016/j.jsp.2022.03.003 link to pre-print of accepted article

Keller-Margulis, M. A., Mercer, S. H., & Matta, M. (2021). Validity of automated text evaluation tools for written-expression curriculum-based measurement: A comparison study. Reading and Writing: An Interdisciplinary Journal, 34, 2461-2480. https://doi.org/10.1007/s11145-021-10153-6

Mercer, S. H., Cannon, J. E., Squires, B., Guo, Y., & Pinco, E. (2021). Accuracy of automated written expression curriculum-based measurement scoring. Canadian Journal of School Psychology, 36, 304-317. https://doi.org/10.1177/0829573520987753

Mercer, S. H., Keller-Margulis, M. A., Faith, E. L., Reid, E. K., & Ochs, S. (2019). The potential for automated text evaluation to improve the technical adequacy of written expression curriculum-based measurement. Learning Disability Quarterly, 42, 117-128. https://doi.org/10.1177/0731948718803296

Conference Presentations

Mercer, S. H., Geres-Smith, R., Guo, Y., & Squires, B. (2023, February). Validity of automated learning progress assessment in written expression. Poster presented at the meeting of the National Association of School Psychologists, Denver, CO, USA. https://doi.org/10.17605/OSF.IO/WHJD3

Matta, M., Keller-Margulis, M., & Mercer, S. H. (2022, February). New directions for writing assessment: Improving feasibility with automated scoring. Presentation at the meeting of the National Association of School Psychologists, Boston, MA, USA.

Matta, M., Keller-Margulis, M., & Mercer, S. H. (2021, July). The use of automated approaches to scoring written expression of elementary students. Poster presented at the at the meeting of the International School Psychology Association, online.

Matta, M., Keller-Margulis, M. A., Mercer, S. H., & Zopatti, K. (2021, February). Improving written-expression curriculum-based measurement feasibility with automated text evaluation programs. Paper presented at the meeting of the National Association of School Psychologists, online.

Mercer, S. H., Keller-Margulis, M. A., & Matta, M. (2020, February). Validity of automated vs. hand-scored written expression curriculum-based measurement samples. Poster presented at the Pacific Coast Research Conference, Coronado, CA, USA.

Mercer, S. H., & Cannon, J. E. (2020, February). Monitoring the written expression gains of learners during intensive writing intervention. Poster presented at the Pacific Coast Research Conference, Coronado, CA, USA.

Keller-Margulis, M. A., & Mercer, S. H. (2019, August). Validity of automated scoring for written expression curriculum-based measurement. Poster presented at the meeting of the American Psychological Association, Chicago, IL, USA.

Mercer, S. H., Tsiriotakis, I., Kwon, E., & Cannon, J. E. (2019, June). Evaluating elementary students' response to intervention in written expression. Paper presented at the meeting of the Canadian Association for Educational Psychology (Canadian Society of the Study of Education), Vancouver, BC, Canada.

License

This project is licensed under the GNU General Public License Version 3 (GPLv3).

Acknowledgments

  • The research reported here was supported by the Institute of Education Sciences, U.S. Department of Education, through Grant R305A190100. The opinions expressed are those of the authors and do not represent views of the Institute or the U.S. Department of Education. Principal Investigator: Milena Keller-Margulis (University of Houston). Co-Principal Investigator: Sterett Mercer (University of British Columbia). Co-Principal Investigator: Jorge Gonzalez (University of Houston). Co-Investigator: Bruno Zumbo (University of British Columbia).
  • This work was supported by a Partnership Development Grant (Assessment for Effective Intervention in Written Expression for Students with Learning Disabilities) from the Social Sciences and Humanities Research Council of Canada. Principal Investigator: Sterett Mercer (University of British Columbia). Co-Investigators: Joanna Cannon (UBC) and Kate Raven (Learning Disabilities Society of Greater Vancouver).
