Update README.md #42

Open · wants to merge 1 commit into main
README.md: 32 changes (22 additions, 10 deletions)
@@ -7,24 +7,36 @@ Python programming courses using Jupyter or Colab notebooks.
 
 ## Who is this project for?
 
-The main target audience is teaching staff who develops programming courses
+The main target audience is teaching staff who develop programming or data science courses
 using Jupyter notebooks. The tools provided by this project facilitate addition
 of autogradable tests to programming assignments, automatic extraction of the
 autograding tests and student versions of the notebooks, and easy deployment of
 the autograding backend to the cloud.
 
-The main focus is Japanese universities, so the example assignments provided
-in the `exercises/` subdirectory are mostly in Japanese language.
+The project was started in collaboration with Japanese universities, so
+some of the example assignments provided in the `exercises/` subdirectory are in
+Japanese. However, anyone using Jupyter notebooks (such as on Colab)
+can benefit from this project.
 
-## How to integrate autograder to your course
+## How to integrate the assistant into your course
 
-If you have a course based on Jupyter notebooks and want to integrate the
-autochecking tests, there are multiple different way how the autochecking tests
-can be run:
+There are two main ways to use the assistant:
 
-* Inside the student notebook (e.g. on Colab). The execution of autochecking
-  tests is handled within the same Python Runtime that student uses. Note that
-  this approach only supports self-checking, and cannot be used for _grading_
+* Student-driven self-checking with feedback. In this mode, the learner runs the tests interactively
+  and receives immediate feedback about their code, instead of having to wait for an instructor or
+  teaching assistant. Although the tests themselves are not visible to the
+  learner, they can be run as many times as desired, and the results
+  of test runs are not recorded.
+
+* Grading. By running tests against submitted code and recording the results, the assistant can
+  also reduce instructor workload by automating the task of checking for specific
+  code syntax and functionality.
+
+Depending on your requirements for the above features, you can use either the "pure-Colab"
+approach or the hosted approach:
+
+* Running the autochecking tests within the same Python runtime that the student uses.
+  Note that this approach only supports self-checking, and cannot be used for _grading_
   student work. See the details in [docs/colab.md](docs/colab.md).
 
 * Hosted on Google Cloud Run. The scripts in this repository provide a server
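
As a rough illustration of the self-checking mode described in the updated text: a checker can run hidden test cases in the same Python runtime as the student's code and print immediate feedback. This is a hypothetical sketch; `check_exercise` and `student_add` are invented names for illustration, not this project's actual API.

```python
# Hypothetical sketch of in-notebook self-checking; the names used here
# are invented for illustration and are not this project's actual API.

def student_add(a, b):
    """The learner's solution, written in a notebook cell."""
    return a + b

def check_exercise(func):
    """Run hidden test cases in the same runtime and print feedback.

    Results are only displayed, never recorded, matching the
    self-checking mode described above.
    """
    cases = [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)]
    failed = False
    for args, expected in cases:
        got = func(*args)
        if got != expected:
            failed = True
            print(f"FAIL: {func.__name__}{args} returned {got!r}, expected {expected!r}")
    if not failed:
        print("All tests passed!")

check_exercise(student_add)
```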