Help prevent reverse engineering of tests #17

Open
habermanUIUC opened this issue Feb 18, 2020 · 3 comments

@habermanUIUC

One of the issues with Python testing is that it's easy for students to read directories and figure out which files they can potentially read. One could argue that a student who has figured this out probably deserves to pass the tests, but this kind of information then spreads.

There are many ways we mitigate this, but all of them involve a lot of jumping through hoops (dynamically renaming the tests, removing the test files after they are loaded but before the student's code is loaded, etc.), and they only work in a sandboxed environment.

A simple and effective way to prevent this would be to allow the initialization of JSONTestRunner to take a parameter (or parameters) limiting how much stdout and stderr is written out in the results, since those streams are the most likely channel for students to get information back. If both stdout and stderr are truncated to a specified maximum, the framework can still print relevant messages while students are kept from dumping the test files.
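For illustration, here is a minimal sketch of that kind of cap, wired up by hand in the harness; TruncatingIO and max_chars are made-up names, not an existing JSONTestRunner parameter:

import sys
from io import StringIO

class TruncatingIO(StringIO):
    # Hypothetical wrapper: silently drop writes past a fixed character budget.
    def __init__(self, max_chars=2048):
        super().__init__()
        self.max_chars = max_chars

    def write(self, s):
        remaining = self.max_chars - self.tell()
        if remaining > 0:
            return super().write(s[:remaining])
        return 0  # budget exhausted; discard the rest

capture = TruncatingIO(max_chars=1024)
sys.stdout = capture
print("x" * 5000)            # only the first 1024 characters survive
sys.stdout = sys.__stdout__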

Another option would be to allow test cases to write to a logfile that is then sent back to stdout first, with anything else truncated.

@ibrahima
Contributor

Hi Mike! That's a valid concern, thanks for bringing it up! Note that if you pass buffer=False to JSONTestRunner, print statements won't be included in the autograder output, but I can't remember off the top of my head whether that leaves you, the test author, any way of adding test output.
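For reference, a minimal run_tests.py showing where that flag goes, assuming the usual gradescope-utils layout (the results path follows the standard Gradescope convention):

import unittest
from gradescope_utils.autograder_utils.json_test_runner import JSONTestRunner

if __name__ == "__main__":
    suite = unittest.defaultTestLoader.discover("tests")
    with open("/autograder/results/results.json", "w") as f:
        # buffer=False: captured print() output is not attached to the results
        JSONTestRunner(stream=f, buffer=False).run(suite)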

I do like your suggestions though, we'll keep them in mind for the future!

@habermanUIUC
Author

habermanUIUC commented Feb 18, 2020

Thanks for that.

  1. Setting buffer to None does indeed keep stdout clean. However, this has an issue (which could be a bug): if buffer is None and the test fails, the assert message throws another exception because it's attempting to write to None.

  2. I think this is indeed an interesting situation to handle. Right now we do the following, which works fine, but we always need to be mindful of it:

import meh_utils
meh_utils.remove_test(__file__)  # delete this test file from disk once it's loaded

import solution as student  # now it's safe to import the submission

  3. Somehow configuring the autograder to manage what's being logged by the tests vs. what's being printed by anything else would be a bit cleaner (and safer to test); a rough sketch follows.
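A rough sketch of that separation using the standard logging module; the logger name and the routing policy are assumptions, not an existing autograder feature:

import logging
import sys
from io import StringIO

framework_log = StringIO()
logger = logging.getLogger("grader")          # hypothetical framework logger
logger.addHandler(logging.StreamHandler(framework_log))
logger.setLevel(logging.INFO)

student_out = StringIO()
sys.stdout = student_out                      # student print() goes here
logger.info("checking part (a)")              # framework message, kept in full
print("student spam " * 1000)                 # capturable and truncatable
sys.stdout = sys.__stdout__

print(framework_log.getvalue() + student_out.getvalue()[:500])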

Thanks again for considering a viable option.

@jhcole

jhcole commented Nov 4, 2021

This isn't a complete solution, but it might be helpful to use unittest.mock to nerf potentially dangerous functions. E.g.

from io import StringIO
from unittest.mock import patch

log = StringIO()
with patch("sys.stdout", new=log):
    ...  # call submitted functions here
# run assertions against log.getvalue()

If they don't need stdout at all, you can just fail the test. This technique could also be used to wrap open with a function that only allows whitelisted files to be read or written.
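A sketch of that second idea, with a hypothetical whitelist and guard function:

import builtins
from unittest.mock import patch

_real_open = builtins.open
ALLOWED = {"data.csv", "output.txt"}  # hypothetical per-assignment whitelist

def guarded_open(file, *args, **kwargs):
    # Block anything off the whitelist, including the test sources themselves.
    if str(file) not in ALLOWED:
        raise PermissionError(f"open() blocked for {file!r}")
    return _real_open(file, *args, **kwargs)

with patch("builtins.open", new=guarded_open):
    ...  # call submitted functions here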
