Handwritten Graph Autograder

License: CC BY-NC 4.0

Overview

This repository implements the methods described in the paper "Automated Grading of Students’ Handwritten Graphs: A Comparison of Meta-Learning and Vision-Large Language Models" by Parsaeifard et al. The project focuses on auto-grading students' handwritten graphs and generating feedback, using two approaches:

  • Meta-Learning Models: Algorithms such as Prototypical Networks, MAML (Model-Agnostic Meta-Learning), and Relation Networks trained specifically for the task of auto-grading handwritten graphs.
  • Vision-Large Language Models (VLLMs): Pre-trained models used in an in-context few-shot learning scenario.

The study compares the performance of these two approaches on a real-world dataset, highlighting their strengths and limitations.
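
For intuition, the core of the prototypical-network approach can be sketched in a few lines: class prototypes are the mean embeddings of the support (graded example) graphs, and a query graph is scored by its distance to each prototype. The snippet below is a minimal PyTorch illustration under these assumptions, not the repository's actual implementation (see src/models.py):

import torch

def proto_classify(support_emb, support_labels, query_emb, n_way):
    # support_emb: (n_way * k_shot, d) embeddings of labeled support graphs
    # query_emb:   (n_query, d) embeddings of the graphs to be graded
    # One prototype per class: the mean of that class's support embeddings.
    prototypes = torch.stack(
        [support_emb[support_labels == c].mean(dim=0) for c in range(n_way)]
    )
    # Score each query by negative Euclidean distance to every prototype.
    dists = torch.cdist(query_emb, prototypes)   # (n_query, n_way)
    return (-dists).softmax(dim=-1)              # class probabilities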

Installation

Clone the repository

git clone https://github.com/bpfrd/handwritten-graph-grading.git
cd handwritten-graph-grading

Set up virtual environment

python3 -m venv venv
source venv/bin/activate    # Linux / macOS
venv\Scripts\activate       # Windows

Environment Configuration

Create a .env file in the project root:

DATA_PATH=~/datasets/handwritten_graphs/all_graphs.json
CHECKPOINTS_DIR=./checkpoints
NUM_THREADS=4

The .env file is excluded from version control and stores sensitive paths.
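
These values can then be read at runtime; a minimal sketch, assuming python-dotenv is among the dependencies in requirements.txt:

import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the project root
data_path = os.path.expanduser(os.getenv("DATA_PATH", ""))
checkpoints_dir = os.getenv("CHECKPOINTS_DIR", "./checkpoints")
num_threads = int(os.getenv("NUM_THREADS", "4"))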

Install dependencies

pip install -r requirements.txt

Repository Structure

│
├── scripts/
│   ├── main.py                 # Main training script
│   └── run_experiments.sh      # Bash script to run multiple configurations
│
├── src/
│   ├── models.py               # Prototypical Network, MAML, Proto-MAML, Matching Network, and Relation Network models
│   ├── explainability.py       # GradCAM, GuidedBackprop, SmoothGrad, etc.
│   ├── my_dataloader.py        # Custom dataloader for sampling meta-learning tasks
│   └── utils.py                # Helper utilities
│
├── .env                        # Stores sensitive config (paths, keys)
├── requirements.txt
├── README.md
└── LICENSE
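
For orientation, sampling an N-way K-shot task of the kind my_dataloader.py provides amounts to picking N grade classes and splitting K + Q examples of each into support and query sets. The sketch below is illustrative only; names and structure do not mirror the actual loader:

import random

def sample_episode(examples_by_class, n_way, k_shot, n_query):
    # examples_by_class: dict mapping a grade label to its list of graph images
    classes = random.sample(list(examples_by_class), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        items = random.sample(examples_by_class[cls], k_shot + n_query)
        support += [(x, label) for x in items[:k_shot]]   # K labeled examples
        query += [(x, label) for x in items[k_shot:]]     # Q held-out examples
    return support, query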

Running Experiments

Single Run

To run a single training experiment:

cd scripts
python3 main.py --experiment_id 0 --n_way 3 --k_shot 2 --num_epochs 50

Optional flags:

--include_text True / --include_image True
--load_model True              # resume training
--checkpoints_dir ./checkpoints
--data_path ~/datasets/.../data.json
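
For example, a multimodal run that resumes from a checkpoint could combine these flags as follows (flag values are illustrative; the data path matches the .env example above):

python3 main.py --experiment_id 1 --n_way 3 --k_shot 2 --num_epochs 50 \
    --include_text True --include_image True --load_model True \
    --checkpoints_dir ./checkpoints --data_path ~/datasets/handwritten_graphs/all_graphs.json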

Multiple Configurations (Batch Experiments)

Run all predefined experiments sequentially:

bash scripts/run_experiments.sh

Each run will log its output in the logs/ directory, e.g.:

logs/run_nway3_kshot1.log
logs/run_nway3_kshot2.log

License

This code is released under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. You may use it for research or educational purposes. Commercial use is prohibited without explicit permission.

© 2025 Parsaeifard et al.

Citation

If you use this code in your research, please cite our paper:

@article{parsaeifard2025automated,
  title={Automated Grading of Students’ Handwritten Graphs: A Comparison of Meta-Learning and Vision-Large Language Models},
  author={Parsaeifard, Behnam and others},
  journal={arXiv preprint arXiv:2507.03056},
  year={2025}
}
