general overview #3

Open · 2 of 3 tasks
lucaferranti opened this issue Nov 12, 2022 · 5 comments

@lucaferranti (Member) commented Nov 12, 2022

This is the next (and last) big thing to do before getting the track released. Time to get serious with it.

In this issue I summarize my understanding of the guidelines, to make sure I know what I have to do, and I try to sketch a concrete action plan.

Overall, I think this should be easier than it looks, because we should have almost all the pieces ready, namely:

  • we already have a bash script to test exercises
  • we have a docker image we use in the CI.

I understand there are three levels of ambition:

  • LEVEL 1: a single grade for the whole exercise; pass if everything is fine, fail otherwise
  • LEVEL 2: info for each individual test of the exercise
  • LEVEL 3: let's think about this in the future, sometimes you gotta choose your battles

So the main challenge is to implement the logic to generate the JSON.
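
For concreteness, here is my understanding of what results.json should look like at each level, based on my reading of the test runner interface docs (exact field names to be double-checked against the spec). Level 1 is just an overall verdict:

```json
{ "version": 1, "status": "pass" }
```

with the captured output attached as a message on failure:

```json
{ "version": 1, "status": "fail", "message": "output of mason test goes here" }
```

Level 2 adds a tests array with one entry per test (the test names below are made up):

```json
{
  "version": 2,
  "status": "fail",
  "tests": [
    { "name": "valid chain", "status": "pass" },
    { "name": "broken chain", "status": "fail", "message": "expected true, got false" }
  ]
}
```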

My rough roadmap:

  • Implement level 1: this should be fairly straightforward; run mason test and use the resulting exit code to decide what to write in the JSON. It could be done as a small bash script (see the sketch after this list).
  • Make it work in Docker.
  • Implement level 2: there are a few approaches. For example, capture the output of mason test --show and parse it to get the result of each individual test; this can be done with a quick-and-dirty script (also sketched below). Alternatively, add a method to the Test class that generates the JSON while running the tests; this requires some work, but probably not too much.
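
Here is a minimal sketch of the level 1 script, assuming the usual test runner convention of receiving the solution directory and an output directory as arguments, and assuming jq is available in the Docker image (both to be verified):

```bash
#!/usr/bin/env bash
# Level 1: run the tests and emit a pass/fail results.json based on the exit code.
set -u

solution_dir="$1"
output_dir="$2"

cd "$solution_dir"
if output=$(mason test 2>&1); then
  echo '{"version": 1, "status": "pass"}' > "$output_dir/results.json"
else
  # Wrap the captured test output in a JSON string via jq.
  jq -n --arg msg "$output" '{version: 1, status: "fail", message: $msg}' \
    > "$output_dir/results.json"
fi
```

The quick-and-dirty level 2 parse could slot into the same script. I haven't verified the exact output format of mason test --show, so the `name: PASSED` / `name: FAILED` lines below are a placeholder:

```bash
# Hypothetical parser: turn "name: PASSED|FAILED" lines into a version 2 results.json.
mason test --show 2>&1 |
  awk -F': ' '/: (PASSED|FAILED)$/ {
    status = ($2 == "PASSED") ? "pass" : "fail"
    printf "{\"name\": \"%s\", \"status\": \"%s\"}\n", $1, status
  }' |
  jq -s '{version: 2,
          status: (if any(.[]; .status == "fail") then "fail" else "pass" end),
          tests: .}' > "$output_dir/results.json"
```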

Comments? Suggestions?

@lucaferranti (Member, Author)

Does the Exercism platform automatically display the output to the student? The output of mason test --show is already fairly informative, so settling for level 1 + giving students the mason output could be a good start.

@lucaferranti (Member, Author)

I think my understanding of the submission workflow is insufficient. Is the following correct?

Student works locally

  • they get the folder, write the code, and test it on their own as instructed, in our case with mason test. What I mean is that no JSON is generated locally.
  • when they submit the exercise, is the whole folder sent or only the stub file?
  • when submitted, the exercise is run using the test runner, and the JSON is generated and used to display the result to the student.

Student works in online editor

  • like above, but only the last step: here both testing and submitting are done with the runner.

@lucaferranti (Member, Author) commented Nov 12, 2022

@kytrinyx, do you have comments or suggestions on my messages above? Any pointer on how to get started is very welcome :)

@kytrinyx (Member)

LEVEL 3: let's think about this in the future, sometimes you gotta choose your battles

Agreed.

when they submit the exercise, is the whole folder sent or only the stub file?

If I recall correctly, all the files that are listed in the config.json as being test and solution files get sent:
https://github.com/exercism/chapel/blob/414a3fc4c867bcef5cdfcd749490bfdc83ad8ecc/exercises/practice/dominoes/.meta/config.json#L6-L12
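
From memory, that files block has roughly this shape (the keys come from the practice exercise spec; the paths here are just illustrative, the real ones are in the linked file):

```json
"files": {
  "solution": ["dominoes.chpl"],
  "test": ["dominoes_test.chpl"],
  "example": [".meta/example.chpl"]
}
```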

I think the rest of your assumptions there are correct, but I will defer to @ErikSchierboom, who is our resident expert. Erik, would you take a look at the above and chime in with any suggestions?

@ErikSchierboom (Member)

LEVEL 3: let's think about this in the future, sometimes you gotta choose your battles

Implementing level 3 only makes sense if you have implemented concept exercises.

If I recall correctly, all the files that are listed in the config.json as being test and solution files get sent:
https://github.com/exercism/chapel/blob/414a3fc4c867bcef5cdfcd749490bfdc83ad8ecc/exercises/practice/dominoes/.meta/config.json#L6-L12

Basically, everything that is in the exercise directory as found in the track repo is sent, but with the submitted files overwriting any existing files.
The test runner thus has access to all files in the exercise directory + any submitted files.
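For example, with a hypothetical Mason-style layout for dominoes, the test runner would end up seeing something like:

```
dominoes/
├── Mason.toml              # from the track repo
├── test/
│   └── dominoes_test.chpl  # from the track repo
└── src/
    └── dominoes.chpl       # submitted file, overwrites the repo version
```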
Does that make sense?
