Prevent submission without running tests. #4936
Could the exercism CLI be enhanced to run tests? There's enough data in a typical command to deduce what to run. For example:
The first word on the command line could tell exercism to shell out to a track-specific script that would run the tests in $EXERCISM/$TRACK/{exercise}. This could also be forced (or recommended) for submit. I know in the beginning we might not enforce that, but surely we should later on. I just started mentoring today, and 3 of my first 5 "customers" didn't use the tests.
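To make the suggestion above concrete, here is a minimal sketch of what such a shell-out might look like. This is not the real exercism CLI (which is written in Go); the workspace layout, the `.bin/run_tests` script name, and the function name are all assumptions made for illustration.

```python
# Hypothetical sketch: a CLI helper that shells out to a track-level
# test script at $EXERCISM/$TRACK/.bin/run_tests, passing it the
# exercise directory. All names here are assumptions, not the real CLI.
import os
import subprocess
import sys


def run_track_tests(workspace: str, track: str, exercise: str) -> int:
    """Run the track's test script for one exercise; return its exit code."""
    script = os.path.join(workspace, track, ".bin", "run_tests")
    exercise_dir = os.path.join(workspace, track, exercise)
    if not os.path.isfile(script):
        # No script defined for this track: don't block submission.
        print(f"no test script for track {track!r}; skipping", file=sys.stderr)
        return 0
    # Capture output so the CLI could display it, or refuse to submit
    # when the exit code is non-zero.
    result = subprocess.run([script, exercise_dir],
                            capture_output=True, text=True)
    print(result.stdout, end="")
    return result.returncode
```

A `submit` command could call this first and warn (or refuse) when the return code is non-zero.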
Running such a command is not easily possible across operating system boundaries, and there are other problems as well.
What I have considered, though, was something like that. This came up only recently during an offline period and you reminded me; I hope I remember to file an issue today with more thought-through wording.
That sounds like a really good idea. Would that be language-independent (e.g. for .NET languages)? It must still be allowed to submit without passing all the tests, or there must be a way of requesting mentoring when the solution is incomplete, because some students hit a mental block with the problem itself rather than the code style.
For dotnet languages it would look similar. And of course, submission without passing tests is part of my WIP proposal, but a lot more explicit than before. I'm in the process of writing out the proposal; I will comment and link here once I have posted it.
Created proposal: #4941
I'm not sure it should be so hard. Sure, trying to imagine how to support tests for 40 languages right here in this thread is hard, but let's remember that the person has installed enough to work on the exercises, and there's a directory set up on their box with env variables pointing to the right place(s). If a person running Python exercises doesn't have Python installed, they have worse problems than test discovery.

It should be not too hard to have a "run_test" script somewhere in $EXERCISM_DIR/$TRACK/.bin, and if they're, say, working on Python, then Python is likely to be installed. In the cases I've seen, the test file itself could just be executable. As for the "each exercise could have different tests" concern: there is still a .exercism/ directory in each exercise I've seen so far, and that could contain an executable (almost one-liner) script to run the test file(s) in that exercise. (Also, I know keeping parity across tracks is a goal, but there are already different numbers of exercises, and different languages have different idioms and best practices, so I'm not sure that will be possible. Better to do the "best" thing for each than to do none, no?)

Updating from pytest to footest would be no harder than updating the {exercise}_test files in the initial download. There is already a command-line tool that works across OS boundaries, so teaching it to look for a command and run it in a process can't be so bad? I like the proposal, and it's an improvement for mentors, but it is somewhat different from what I'm theorizing.
But how would the CLI know if the currently downloaded version is |
The CLI doesn't have to know that. It only needs to know that inside each exercise's directory, in the .exercism folder, there is a script called "run_tests". Or there could even be a JSON key in the files there that tells the CLI what process to launch. I should think there could always be a one-liner able to run the tests in an exercise, and the stdout could be captured and/or output. It can be updated and configured exactly as often as the tests and exercises themselves. If the exercises move from one test framework to another, the metadata in the exercise would change too.

Heck, in my track (Python) the tests and the instructions to run the tests are already out of phase (the tests are written for unittest, not for pytest, but the instructions suggest the latter) and it all works out fine. I'm not familiar with all the languages (obviously), but in the case of Python, running the test_foo.py file from each exercise is itself all that is necessary to run the tests: without pytest, just "python test_foo.py". Perhaps that would work for many languages?
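The metadata idea described above can be sketched briefly. This is a hypothetical illustration only: the file name `metadata.json` and the key `test_command` are assumptions, not part of any real `.exercism` folder layout.

```python
# Hypothetical sketch: resolve how to run an exercise's tests from its
# .exercism folder. First prefer a JSON key ("test_command" — an assumed
# name), then fall back to an executable run_tests script.
import json
import os
import shlex


def find_test_command(exercise_dir: str):
    """Return the argv list to run this exercise's tests, or None."""
    meta_path = os.path.join(exercise_dir, ".exercism", "metadata.json")
    if os.path.isfile(meta_path):
        with open(meta_path) as f:
            meta = json.load(f)
        if "test_command" in meta:
            # e.g. "python test_foo.py" -> ["python", "test_foo.py"]
            return shlex.split(meta["test_command"])
    # Fall back to an executable run_tests script, if one exists.
    script = os.path.join(exercise_dir, ".exercism", "run_tests")
    if os.access(script, os.X_OK):
        return [script]
    return None
```

Because the metadata ships with each exercise download, a framework change on the track side would automatically change what the CLI runs, which is the point being made above.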
Okay, I somehow missed that the folder gets some additional things. What language do you want to write the script in? How do you plan to make it compatible with Linux, macOS and Windows?
To cross-post from #4941:
When we do this, would we still need this local test running? |
Since #4941 was closed on Sep 19, 2019 cf. this comment by @iHiD in favor of running the test suite server-side, this feature request can be closed. |
I've seen quite a few students submit their first solution on the C# track using something like Console.ReadLine to get the name for the TwoFer exercise. Believing they have solved the exercise, they submit it. I understand that having a language-specific test runner on the server side would be a massive task, so instead I'm proposing that the CLI outputs a message similar to the following upon the first submission: