Inconsistent test results when using different runners #28
I should also note that Python standard convention of …
Hmm, this is curious. I agree that all methods should result in the same total number of test cases.
Okay, so: the pytest and tox methods actually both discover 2010 test cases, but do not run them all. Which ones run depends on environment variables, which are set properly in the tox.ini file. I propose to document this behavior. The last method, via setup.py, is deprecated; I propose to remove it.
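To illustrate the mechanism being described, here is a minimal sketch of how a test suite can gate tests on environment variables; the variable name `LONGTESTS` is an assumption for illustration, not necessarily what ppci's suite actually uses:

```python
# Sketch: environment-variable-gated tests. Both pytest and unittest
# will *discover* both classes, but the slow one is skipped unless the
# (hypothetical) LONGTESTS variable is set, e.g. by tox.ini.
import os
import unittest

LONG_TESTS = os.environ.get("LONGTESTS")  # assumed variable name


class TestFast(unittest.TestCase):
    def test_always_runs(self):
        self.assertEqual(1 + 1, 2)


@unittest.skipUnless(LONG_TESTS, "set LONGTESTS=1 to enable slow tests")
class TestSlow(unittest.TestCase):
    def test_expensive(self):
        self.assertEqual(sum(range(1000)), 499500)
```

With a scheme like this, a plain `pytest` run reports the slow tests as skipped, while a tox run that sets the variable executes everything, which would explain differing counts between runners.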
I updated the docs as per the above comment: https://ppci.readthedocs.io/en/latest/development.html#running-the-testsuite |
Thanks for looking into this, and for the detailed analysis.

To clarify, deprecated by whom/what? You see, "python setup.py test" is the standard, generalized way to test a Python package. It allows abstracting away whatever particular runner a specific project may use behind a common interface. I'd recommend supporting it, with proper …

Beyond that, what can I say - you're the author of the testsuite, so you would know how to do it best. Choosing one test method makes good sense. If you still want my 2 cents on that, then I find testing to be a rather boring area :-P. My favorite testing tool is nosetests, where I just write Python functions with asserts, voila. And that's when I need to test API stuff, but I generally try to lean toward integration tests at the "command line app" level, e.g. for PPCI that would be: input C code, expected IR output, a shell driver to execute the command compiling C to IR, and diffing the results. That's as close as possible to the way people actually use the stuff.

Back to unit testing: tox is the least familiar tool for me, I always considered it too eerie and complicated ;-). But yeah, I'm definitely looking forward to learning new things from dealing with PPCI ;-).
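The "input, expected output, diff" style described above can be sketched in a few lines; `compile_c_to_ir` here is a hypothetical stand-in for the real compiler invocation (in practice it would be a subprocess call to the actual command-line tool):

```python
# Sketch of a diff-based integration check: compile an input, then
# compare the actual output against the expected text, printing a
# unified diff on mismatch. The "compiler" is a placeholder.
import difflib


def compile_c_to_ir(source: str) -> str:
    # Placeholder: pretend each C line maps to one IR line.
    return "\n".join(f"ir: {line}" for line in source.splitlines())


def check_output(source: str, expected: str) -> bool:
    actual = compile_c_to_ir(source)
    if actual == expected:
        return True
    # Show a unified diff when the outputs differ.
    print("\n".join(difflib.unified_diff(
        expected.splitlines(), actual.splitlines(),
        fromfile="expected", tofile="actual", lineterm="")))
    return False


# Usage: one sample, one expected-output string.
assert check_output("int x;", "ir: int x;")
```

A shell driver would then just loop this over a directory of sample/expected pairs.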
The main testing tool used is pytest; tox is a sort of wrapper around venv and pytest.
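For readers unfamiliar with tox, a minimal config makes the relationship concrete. This is an illustrative sketch only; the section names follow tox conventions, but the environment-variable name is assumed, not taken from ppci's actual tox.ini:

```ini
# Hypothetical minimal tox.ini sketch. tox builds a virtualenv,
# installs the deps, sets/passes environment variables, then runs pytest.
[tox]
envlist = py3

[testenv]
deps = pytest
setenv =
    LONGTESTS = 1
commands = pytest -v test/
```

This is why `tox -e py3` can run more tests than a bare `pytest` invocation: the variables enabling the full suite are set inside the tox environment.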
When I run this:

```
$ python setup.py test
running test
WARNING: Testing via this command is deprecated and will be removed in a future version. Users looking for a generic test entry point independent of test runner are encouraged to use tox.
running egg_info
writing ppci.egg-info/PKG-INFO
writing dependency_links to ppci.egg-info/dependency_links.txt
...
```

I guess it is deprecated by setuptools itself? Unittests are useful in this project; there is simply too much stuff going on behind some high-level API calls, so it makes sense to test subsystems. Btw, there are a whole bunch of sample snippets with C code and corresponding output on stdout. This is what you mean by integration testing: having a C sourcecode and diffing its output against the expected output.
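Those stdout-comparison checks can be driven from Python as well. In this sketch, `python -c` stands in for the compiled sample program; in the real suite the executable would be the compiled C snippet:

```python
# Sketch: run a sample program, capture its stdout, and compare it
# against the expected text. `python -c` is a stand-in executable.
import subprocess
import sys


def run_and_capture(code: str) -> str:
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, check=True)
    return result.stdout


expected = "hello\n"
actual = run_and_capture("print('hello')")
assert actual == expected
```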
I found some information about setuptools here: https://setuptools.readthedocs.io/en/latest/setuptools.html#test-build-package-and-run-a-unittest-suite Looks like that feature is being deprecated.
Docs at https://ppci.readthedocs.io/en/latest/development.html#running-the-testsuite present three ways to run the testsuite, and the way it's worded there, one can only imagine that they are all equivalent. However, trying them results in different numbers of tests run:

```
python -m unittest discover -s test
```

This would be a default way, as it uses a builtin Python module. But:

```
python -m pytest -v test/
```

Quite different result:

```
tox -e py3
```

This gives the biggest coverage.

Would be nice to know the reason for the discrepancies and do something about them (ideally, make them all run the same number of tests, vs. leaving only one way to run them ;-) ).
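One common source of such discrepancies (independent of environment variables) is that the runners' discovery rules differ: `unittest discover` only collects methods of `unittest.TestCase` subclasses from `test*.py` files, while pytest also collects plain `test_*` functions. A small stdlib-only demonstration, with an illustrative file name:

```python
# Demonstrates a discovery difference: unittest's loader collects only
# TestCase methods, so the plain test_* function below is invisible to
# `python -m unittest discover`, while pytest would collect it too.
import pathlib
import tempfile
import unittest

SAMPLE = '''
import unittest

def test_plain_function():        # pytest collects this; unittest does not
    assert 1 + 1 == 2

class TestCaseStyle(unittest.TestCase):   # both runners collect this
    def test_method(self):
        self.assertTrue(True)
'''


def count_unittest_discovered(directory: str) -> int:
    suite = unittest.TestLoader().discover(start_dir=directory)
    return suite.countTestCases()


with tempfile.TemporaryDirectory() as tmp:
    pathlib.Path(tmp, "test_sample.py").write_text(SAMPLE)
    print(count_unittest_discovered(tmp))  # 1: only the TestCase method
```

If ppci's suite mixes both styles, the unittest and pytest counts would diverge even with identical environments.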