
Redesign evaluator #2004

Open
luoyanjie opened this issue Feb 7, 2017 · 2 comments


luoyanjie commented Feb 7, 2017

No description provided.


fonglh commented Feb 8, 2017

Key considerations

  1. Performance. Possibly due to poll waiting, each evaluation takes a minimum of about 7 seconds.
    This should be reduced to 3 seconds, preferably faster.
  2. Test independence. A crash (killed due to limits, bad code, etc.) on ANY of the test cases means no test report file is produced, so all the test cases look like they have failed. This is misleading to both the student and the TAs.

One possible solution is to discover all the test cases beforehand, then run them one by one; the overhead needs to be benchmarked. A rough sketch of this approach is below.
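
As an illustration only (not the actual evaluator), here is a minimal sketch of that idea, assuming a plain Python unittest suite: discover the cases up front, then run each one in its own process so a crash in one case cannot wipe out the report for the others.

```python
import subprocess
import sys
import unittest


def discover_test_ids(start_dir="tests"):
    """Collect fully qualified test ids, e.g. 'test_module.TestClass.test_method'."""
    ids = []

    def walk(suite):
        for item in suite:
            if isinstance(item, unittest.TestSuite):
                walk(item)
            else:
                ids.append(item.id())

    walk(unittest.TestLoader().discover(start_dir))
    return ids


def run_one_by_one(test_ids, timeout=10):
    """Run each test case in its own interpreter; a crash only affects that case."""
    results = {}
    for test_id in test_ids:
        try:
            proc = subprocess.run(
                [sys.executable, "-m", "unittest", test_id],
                capture_output=True,
                timeout=timeout,
            )
            results[test_id] = "passed" if proc.returncode == 0 else "failed"
        except subprocess.TimeoutExpired:
            results[test_id] = "timed out"
    return results


if __name__ == "__main__":
    # Benchmarking hook: time this loop to measure the per-process overhead.
    for test_id, outcome in run_one_by_one(discover_test_ids()).items():
        print(f"{test_id}: {outcome}")
```

The per-process interpreter and framework startup is exactly the overhead that would need to be benchmarked.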


fonglh commented Oct 24, 2017

Polling was eliminated in #2308, and test types are now run independently (#2596, #2603, #2593, along with changes to the evaluator script in the images, Coursemology/evaluator-images#18).

Test case dependence is a necessary teaching aid, and running the test cases one by one is likely to incur significant overhead because of the cost of invoking and booting the test framework for each case.

A better solution would be to run the tests by test group, then re-run them independently only if the group has a fatal crash. Show a warning to the user that the tests were run independently, in case the instructor relies on dependent tests. A sketch of this fallback strategy follows below.
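
A minimal sketch of that group-then-fallback strategy, again assuming a Python unittest-based suite rather than the actual evaluator script:

```python
import subprocess
import sys


def run_group(test_ids, timeout=30):
    """Run every case of a test group in a single interpreter invocation."""
    return subprocess.run(
        [sys.executable, "-m", "unittest", *test_ids],
        capture_output=True,
        timeout=timeout,
    )


def run_independently(test_ids, timeout=10):
    """Fallback: one process per case so a single crash cannot hide the rest."""
    outcomes = {}
    for test_id in test_ids:
        try:
            proc = subprocess.run(
                [sys.executable, "-m", "unittest", test_id],
                capture_output=True,
                timeout=timeout,
            )
            outcomes[test_id] = proc.returncode == 0
        except subprocess.TimeoutExpired:
            outcomes[test_id] = False
    return outcomes


def evaluate_group(test_ids):
    try:
        proc = run_group(test_ids)
    except subprocess.TimeoutExpired:
        proc = None
    # unittest exits with 0 on success and 1 on ordinary failures; a timeout or
    # any other exit code (e.g. a negative value from a signal) is treated as a
    # fatal crash, which triggers the independent re-run.
    if proc is not None and proc.returncode in (0, 1):
        return {"mode": "grouped", "report": proc.stdout.decode()}
    # "independent" mode should surface a warning to the user, since instructors
    # may rely on dependent test cases.
    return {"mode": "independent", "report": run_independently(test_ids)}
```

This keeps the cheap grouped run as the common path and only pays the per-case startup cost when a fatal crash would otherwise mask the remaining results.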
