
pytest not working correctly! #124

Closed
keyhan opened this issue Apr 15, 2016 · 9 comments

@keyhan

keyhan commented Apr 15, 2016

Hi, I have been trying gabbi to write some simple tests and had good luck using gabbi-run, but I need Jenkins reports so I tried the py.test version, with the loader code looking like this:

import os

from gabbi import driver

# By convention the YAML files are put in a directory named
# "gabbits" that is in the same directory as the Python test file. 
TESTS_DIR = 'gabbits'

def test_gabbits():
    test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
    test_generator = driver.py_test_generator(
        test_dir, host="http://www.nfdsfdsfdsf.se", port=80)

    for test in test_generator:
        yield test

The yaml-file looks very simple:

tests:
  - name: Do get to a faulty site
    url: /sdsdsad
    method: GET
    status: 200

The problem is that the test passes. The URL does not exist, so the test should fail with a connection refused; I have also tried a site returning 404, but the test still passes. Am I doing something wrong here?

@cdent
Owner

cdent commented Apr 15, 2016

Sigh, looks like I got that wrong. The tests do run but don't report failure correctly. Apparently in my enthusiasm over the right thing almost happening I failed to test test failures. I will fix that tomorrow (I hope). In the meantime, if you're able to use a unittest-based runner, that ought to work.
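For reference, a unittest-style loader is roughly the following, based on gabbi's documented load_tests pattern (check the docs for the exact keyword arguments):

import os

from gabbi import driver

TESTS_DIR = 'gabbits'


def load_tests(loader, tests, pattern):
    # Hand a TestSuite built from the YAML files to unittest discovery.
    test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
    return driver.build_tests(test_dir, loader,
                              host='www.nfdsfdsfdsf.se', port=80)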

Another thing to note: where you use host="http://www.nfdsfdsfdsf.se" it needs to be host="www.nfdsfdsfdsf.se". It really does want just a host at that stage.
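That is, in the loader above the call would look something like:

    test_generator = driver.py_test_generator(
        test_dir, host="www.nfdsfdsfdsf.se", port=80)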

@keyhan
Author

keyhan commented Apr 15, 2016

Thanks for the quick note. For us pytest is important since it enables Jenkins reporting :)

cdent added a commit that referenced this issue Apr 16, 2016
Fixes #124

Though pytest was collecting and running tests, the results were
not actually being handled. If a test failed, it still appeared to
pass. The fundamental reason for this is that only the GabbiSuite
which contains all the tests from each YAML file was being checked.
These look like tests to pytest and pass.

The eventual fix is fairly complex and could maybe be made less so
by learning how to use modern parameterized pytest[1] rather than the
old yield style being used here. The fix includes:

* Creating a PyTestResult class which translates various unittest
  result types into pytest skips or xfails or reraises errors as
  required.

* Works around the GabbiSuite.run() based fixture handling that
  unittest-based runners automatically use but won't work properly
  in the pytest yield setting by adding start() and stop() methods
  to the suite and yielding artificial tests which call start and stop.[2]

Besides getting failing tests to actually fail this also gets some
other features working:

* xfail and skip work, including skipping an entire yaml file with
  the SkipFixture
* If a single test from a file is selected, all the prior tests in
  the file will run too, but only the one requested will report.

[1] http://pytest.org/latest/parametrize.html#pytest-generate-tests

[2] This causes the number of tests to increase but it seems to be the
only way to get things working without larger changes.
@cdent
Owner

cdent commented Apr 16, 2016

@keyhan, it looks like I've been able to fix it in #126. If you're able to test that before I release it that would be great, otherwise I'll sleep on it, look over it again tomorrow, and then release it.

It should work now. The basic problem was that results were not being managed during the running of the tests, but getting that concept pushed in turned out to be fairly complex.

cdent closed this as completed in 281a660 on Apr 16, 2016
@keyhan
Author

keyhan commented Apr 16, 2016

Thanks, I will check it later today.

@cdent
Owner

cdent commented Apr 16, 2016

I've got a release building now that will go out as 1.17.1 in a few minutes. It may not be perfect and may still need some further refinement, but it is at least better than the previous version...

So no need to do anything special when you are doing your testing later; just get the latest version from PyPI. Would definitely like to hear your feedback whatever happens.

Thanks.

@cdent
Owner

cdent commented Apr 16, 2016

The new version is working with the example you gave above. The main remaining issue is that when there is a failure the output is a mess; I've created an issue for that: #127.

@keyhan
Author

keyhan commented Apr 16, 2016

Hi, I am testing your latest from pip.
Now I do get 1 failure as I should, though the printouts are ugly. But the new problem is that if I fix the same test so it passes, I see that 3 tests have passed instead of 1. Also, I think the information shown on stdout should be the names of the tests and whether they passed or not, similar to the gabbi-run command printouts.

@keyhan
Author

keyhan commented Apr 16, 2016

This is what I get on screen when that one test passes:

keyhan@keyhan-desktop:~/gabbi$ py.test --junitxml=/home/keyhan/gabbi/result.xml -svx test_keyhan.py
============================= test session starts ==============================
platform linux2 -- Python 2.7.10, pytest-2.9.1, py-1.4.31, pluggy-0.3.1 -- /usr/bin/python
cachedir: .cache
rootdir: /home/keyhan/gabbi, inifile:
collected 3 items

test_keyhan.py::test_gabbits::['start_driver_google_scenario_do_get_to_google.se'] <- ../../../usr/local/lib/python2.7/dist-packages/gabbi/suite.py PASSED
test_keyhan.py::test_gabbits::['driver_google_scenario_do_get_to_google.se'] <- ../../../usr/lib/python2.7/unittest/case.py PASSED
test_keyhan.py::test_gabbits::['stop_driver_google_scenario_do_get_to_google.se'] <- ../../../usr/local/lib/python2.7/dist-packages/gabbi/suite.py PASSED

-------------- generated xml file: /home/keyhan/gabbi/result.xml ---------------
=========================== 3 passed in 0.09 seconds ===========================

@cdent
Owner

cdent commented Apr 16, 2016

Yeah, that's the result of the way fixtures are being managed. What you see there is a stopgap to deal with the separation of collecting and running tests, and the fact that gabbi fixtures were initially implemented in a way that integrates with unittest.TestSuite (instead of cases). It'll have to do until I come up with something better. I'll make an issue for that too so it is tracked.

I'm pretty sure all these issues can be overcome. It's just a matter of first overcoming my ignorance (or someone else beating me to it). Your help has been extremely useful (obviously).
