- go to the same folder as the `makefile`
- download the group JSON submissions from Canvas and save them in `group_JSON_submissions`
- download the evaluation JSON submissions from Canvas and save them in `evaluation_JSON_submissions`
- in the same directory, run `make`
- you can simply `make` again after any update to the submission files
- output can be found in the `output` directory (`email_output.txt` should contain most of the vital information; check the invalid JSON files for submissions that did not get evaluated because of invalid JSON; a sketch of such a check follows below)
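
Below is a minimal sketch of how an invalid-JSON check like the one mentioned above could look. Only the `group_JSON_submissions` directory name comes from the steps above; the `find_invalid_json` helper and its output format are illustrative assumptions, not the script's actual bookkeeping.

```python
import json
from pathlib import Path

# Hypothetical helper, not part of the actual script: scan a submissions
# directory and report files whose contents are not valid JSON, mirroring
# the "invalid JSON" bookkeeping described above.
def find_invalid_json(submission_dir):
    invalid = []
    for path in Path(submission_dir).glob("*.json"):
        try:
            json.loads(path.read_text(encoding="utf-8"))
        except json.JSONDecodeError:
            invalid.append(path.name)
    return invalid

if __name__ == "__main__":
    for name in find_invalid_json("group_JSON_submissions"):
        print(f"invalid JSON, not evaluated: {name}")
```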
Uncomment the lines involving `diff_message` in `DiplomacyVetting.py` for easier grading (it records the difference in output if there is any, which helps pinpoint the bugs in student code). Also see `CommonMistakes.md` for comments that help with grading.
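
For reference, a `diff_message` of the kind mentioned above could be built with `difflib`; this is only a hedged sketch, not the actual implementation inside `DiplomacyVetting.py`.

```python
import difflib

# Illustrative only: shows one way a diff_message could be built; the real
# (commented-out) diff_message lines live in DiplomacyVetting.py.
def make_diff_message(expected, actual):
    diff = difflib.unified_diff(
        expected.splitlines(keepends=True),
        actual.splitlines(keepends=True),
        fromfile="expected_output",
        tofile="student_output",
    )
    return "".join(diff)

if __name__ == "__main__":
    print(make_diff_message("A B Yes\n", "A B No\n"))
```

A unified diff makes it easy to see exactly which output lines differ between the expected results and the student's results.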
Make sure to read `TODO.md` for features that are not yet implemented; some quirks of the script can be explained by the lack of certain features.
Modified from cs330e-collatz-grading-script: https://github.com/peter1357908/cs330e-collatz-grading-script/
- no longer checks for optional files (Collatz had a sphere challenge optional file)
- checks for Git flow files like `.gitignore` and `.gitlab-ci.yml`
- commented out some print statements
- checks for the group number (a new field added to the JSON file at the time of this project)
- accounts for multiple acceptance test files
- overall more modularization (easier to adapt to different projects)
- hard-checks the number of unit tests (3 for this project)
- uses `subprocess.run()` where possible (replacing the `call()`, `check_call()`, `check_output()`, and `popen()` calls); see the sketch after this list
- added a simple progress tracker
- silenced a lot of commands by directing output to `subprocess.DEVNULL`, as well as silencing makefile commands with `@`
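
To illustrate the `subprocess.run()` and `subprocess.DEVNULL` points above, here is a small sketch; the `make test` command is an assumed example, and the actual script may run different commands or pass extra arguments such as `cwd`.

```python
import subprocess

# Illustrative sketch of the subprocess.run() pattern described above:
# run a command while silencing its output with DEVNULL and checking the
# return code, instead of using call()/check_call()/check_output().
result = subprocess.run(
    ["make", "test"],  # assumed example command, not necessarily what the script runs
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)
if result.returncode != 0:
    print("command failed with return code", result.returncode)
```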