
Prevent submission without running tests. #4936

Closed
Tom01098 opened this issue Jul 15, 2019 · 11 comments

@Tom01098

I've seen quite a few students on the C# track write their first solution to the TwoFer exercise using something like Console.ReadLine to get the name. Believing they have solved the exercise, they submit it without ever running the tests.

I understand that having a language-specific test runner on the server side would be a massive task, so instead I'm proposing that the CLI output a message similar to the following upon the first submission:

Do all your tests pass for this solution? You can run the tests using the track's language-specific testing method.

@mfeif

mfeif commented Jul 17, 2019

Could the exercism CLI be enhanced to run tests?

There's enough data in a typical command to deduce what's needed:

exercism download --exercise=bank-account --track=python

That could be:

exercism test --exercise=bank-account --track=python

The first word on the command line could tell exercism to shell out to the track, which would run a defined script that would run the tests in $EXERCISM/$TRACK/{exercise}...
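
Something along those lines could stay quite small. A minimal sketch in Python, purely for illustration (the real CLI is written in Go, and the EXERCISM_WORKSPACE variable, directory layout, and run_tests script name below are assumptions, not existing CLI behaviour):

import os
import subprocess
import sys

def run_track_tests(track: str, exercise: str) -> int:
    # Resolve the exercise directory under the workspace root; the env
    # variable name and fallback path are assumptions for this sketch.
    workspace = os.environ.get("EXERCISM_WORKSPACE", os.path.expanduser("~/exercism"))
    exercise_dir = os.path.join(workspace, track, exercise)
    script = os.path.join(exercise_dir, ".exercism", "run_tests")
    if not os.path.exists(script):
        print("no test script found for this exercise", file=sys.stderr)
        return 1
    # Shell out to the track-defined script and propagate its exit code.
    return subprocess.run([script], cwd=exercise_dir).returncode

if __name__ == "__main__":
    sys.exit(run_track_tests("python", "bank-account"))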

This could also be forced (or recommended) for submit... I know in the beginning we might not enforce that, but surely we should later on. I just started mentoring today, and 3 of my first 5 "customers" didn't use the tests.

@NobbZ
Member

NobbZ commented Jul 18, 2019

Running such a command is not easily possible across operating system boundaries, and there are other problems:

  1. The command is different for each track, so we would need a way to tell the CLI which track requires which command.
  2. Unifying into a single script is hard: on Windows no bash is available, and on Linux and macOS there is no interpreter for bat or ps1 files.
  3. The command might even be different for each exercise.
  4. Changing the structure of exercises (e.g. switching from pytest to footest) would require an update of the CLI.
    4.1. This would cause incompatibilities when students still have an exercise using the old test format, making them unable to submit.

What I have considered, though, is something like exercism submit --test-run="rebar3 eunit" src/hello_world.erl: the CLI would call rebar3 eunit before submission, attach the test runner's output and exit code to the submission, and ask the student to confirm the submission if the tests fail.
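
As a rough sketch of that flow, in Python purely for illustration (the actual CLI is written in Go; the confirmation prompt and the attach step are part of the idea above, not existing behaviour):

import shlex
import subprocess

def run_tests(test_command: str):
    # Run the student-supplied command, capturing its output and exit
    # code so both can be attached to the submission.
    result = subprocess.run(shlex.split(test_command), capture_output=True, text=True)
    return result.stdout + result.stderr, result.returncode

output, exit_code = run_tests("rebar3 eunit")
if exit_code != 0:
    print(output)
    if input("Tests failed. Submit anyway? [y/N] ").lower() != "y":
        raise SystemExit("submission aborted")
# ...attach output and exit_code to the submission, then upload...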

This came up only recently during an offline period, and you reminded me of it. I hope I remember to file an issue today with more thought-through wording.

@Tom01098
Author

> What I have considered, though, is something like exercism submit --test-run="rebar3 eunit" src/hello_world.erl: the CLI would call rebar3 eunit before submission, attach the test runner's output and exit code to the submission, and ask the student to confirm the submission if the tests fail.

That sounds like a really good idea. Would that be language-independent (i.e. for .NET languages, use dotnet test)?

It must still be possible to submit without passing all the tests, or to request mentoring when the solution is incomplete, because some students hit a mental block on the problem itself rather than on code style.

@NobbZ
Member

NobbZ commented Jul 18, 2019

For .NET languages it would look like exercism submit --test-run="dotnet test" hello-world.cs (or whatever the implementation file is named).

And of course, submission without passing tests is part of my WIP proposal, but a lot more explicit than before.

I'm in the process of writing out the proposal. I will comment and link here once I have posted it.

@NobbZ
Member

NobbZ commented Jul 18, 2019

Created proposal: #4941

@mfeif

mfeif commented Jul 18, 2019

I'm not sure it should be so hard... sure, trying to imagine how to support tests in 40 languages right here in this thread is hard, but let's remember that the person has installed enough to work on the exercises, and there's a directory set up on their box with env variables pointing to the right place(s). If a person running Python exercises doesn't have Python installed, they have worse problems than test discovery.

It should be not too hard to have a "run_tests" script somewhere in $EXERCISM_DIR/$TRACK/.bin... and if they're, say, working on Python, then Python is likely to be installed. In the cases I've seen, the test file itself could just be executable.

As for the "each exercise could have different tests" thing, well, there is still a .exercism/ directory in each exercise I've seen so far. That could contain an executable (almost one-liner) script to run the test file(s) in that exercise, as sketched below.
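
For the Python track, such a script could itself be a few lines of Python. A hypothetical <exercise>/.exercism/run_tests (the location, name, and *_test.py naming pattern are assumptions here):

import sys
import unittest

# Discover the exercise's unittest-style test files in the parent
# directory (this script is assumed to live in <exercise>/.exercism/).
suite = unittest.defaultTestLoader.discover("..", pattern="*_test.py")
result = unittest.TextTestRunner().run(suite)
sys.exit(0 if result.wasSuccessful() else 1)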

(also, I know keeping parity across tracks is a goal, but there are already different numbers of exercises, and different languages have different idioms and best practices, so I'm not sure that will be possible. Better to do the "best" thing for each than to do none, no?)

Updating from pytest to footest would be no harder than updating the {exercise}_test files in the initial download.

There is already a command-line tool that works across OS boundaries, so teaching it to look for a command and run it in a process can't be so bad?

I like the proposal, and it's an improvement for mentors... but it's somewhat different from what I'm theorizing.

@NobbZ
Member

NobbZ commented Jul 19, 2019

> Updating from pytest to footest would be no harder than updating the {exercise}_test files in the initial download.

But how would the CLI know whether the currently downloaded version uses footest or still pytest? How would you communicate this to the CLI at all? It needs to be updated. Katrina can't bundle a release every time we change something on the track, especially not in a timely manner. Also, students are not necessarily able to update the CLI themselves (corporate or university computers, third-party packages (like for Arch Linux)).

@mfeif

mfeif commented Jul 19, 2019

The CLI doesn't have to know that. It only needs to know that inside each exercise directory, in the .exercism folder, there is a script called "run_tests". Or there could even be a JSON key in the files in there that tells the CLI what process to launch. I should think there could always be a one-liner able to run the tests in an exercise, and the stdout could be captured and/or output.
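
A sketch of the JSON-key variant (the metadata file name and the test_command key are invented here for illustration; neither is part of any current exercism spec):

import json
import shlex
import subprocess

# Read a hypothetical "test_command" key from the exercise metadata.
with open(".exercism/metadata.json") as f:
    metadata = json.load(f)

command = metadata.get("test_command", "python -m unittest discover")
# Launch the configured process and capture its stdout for display.
result = subprocess.run(shlex.split(command), capture_output=True, text=True)
print(result.stdout)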

It can be updated and configured exactly as often as the tests and exercises themselves. If the exercises move from one test framework to another, the metadata in the exercise would change too. Heck, in my track (Python) the tests and the instructions for running them are already out of phase (the tests are written for unittest, not for pytest, but the instructions suggest the latter) and it all works out fine.

I'm not familiar with all the languages (obviously), but in the case of Python, running the test_foo.py file from each exercise is itself all that is necessary to run the tests: even without pytest, just "python test_foo.py". Perhaps that would work for many languages?
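
That works because the Python track's test files are written for unittest and end with a self-running block, along these lines (the exercise and assertion shown are just illustrative):

import unittest

from two_fer import two_fer  # the student's solution

class TwoFerTest(unittest.TestCase):
    def test_no_name(self):
        self.assertEqual(two_fer(), "One for you, one for me.")

if __name__ == "__main__":
    # This block is what makes "python two_fer_test.py" run the tests
    # directly, with no pytest required.
    unittest.main()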

@NobbZ
Member

NobbZ commented Jul 19, 2019

Okay, I somehow missed that the folder gets some additional things.

What language do you want to write the script in? How do you plan to make it compatible with Linux, macOS, and Windows?

@iHiD
Member

iHiD commented Jul 23, 2019

To cross-post from #4941:

> We're planning to do server-side running of tests in August. We could then feed back to the user that the tests have failed and they should try again.

When we do this, would we still need local test running?

@sshine

sshine commented Sep 24, 2019

Since #4941 was closed on Sep 19, 2019 (cf. this comment by @iHiD) in favor of running the test suite server-side, this feature request can be closed.
