Running nose tests reports the wrong number of tests passing or failing. #4535

Closed
d3r3kk opened this issue Feb 25, 2019 · 1 comment
Labels
area-testing, bug (Issue identified by VS Code Team member as probable bug)

Comments


d3r3kk commented Feb 25, 2019

Environment data

  • VS Code version: 1.31.1
  • Extension version (available under the Extensions sidebar): 2019.2-RC (also reproduced with 2019.1)
  • OS and version: Windows 10
  • Python version (& distribution if applicable, e.g. Anaconda): Python 3.7.2
  • Type of virtual environment used (N/A | venv | virtualenv | conda | ...): venv
  • Relevant/affected Python packages and their versions: nose==1.3.7

Expected behaviour

Test results report 1 pass and 1 fail.

Actual behaviour

Test results report 2 passes and no failures when the passing test is run.
Test results report 2 fails and no passes when the failing test is run.


[GIF: nose_test_duplicating_results - running a test from its code lens duplicates the test results]


Steps to reproduce:

  1. Create a workspace
  2. Set up venv for the workspace in a Powershell window:
    • cd <workspace>
    • py -3.7 -m venv .venv
    • .venv/Scripts/Activate.ps1
    • python -m pip install -U pip
    • python -m pip install nose
  3. Open VS Code and open the workspace folder you created.
  4. Add a single file called test_nose.py to the workspace.
  5. Add the following code to test_nose.py:
def test_passing():
    # Trivially true assertion; this test always passes.
    assert 42 == 42

def test_failure():
    # Deliberately false assertion; this test always fails.
    assert 42 == -13
  6. Click the Run Tests button in the status bar.
  7. Click on Run All Unit Tests in the command bar selection that pops up.
  8. Note that the two test methods are appropriately marked by code lens (test_passing is marked as passing, test_failure is marked as not passing).
  9. Click on the Run Test code lens of test_passing.
    • Note that 2 tests are reported as passing.
  10. Click on the Run Test code lens of test_failure.
    • Note that 2 tests are reported as failing (see the command-line comparison sketched after this list).
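
For comparison, here is a minimal sketch (not part of the original report) that selects a single test directly with nose from the activated venv, using nose's <file>:<function> selector syntax. The file name test_nose.py and the exact invocation are assumptions based on steps 4 and 5 above.

# Hypothetical comparison script (assumption, not from the report): run only
# test_passing through nose's programmatic entry point. Requires nose to be
# installed and test_nose.py (from step 5) to be in the current directory.
import nose

# Select a single test with the "<file>:<function>" syntax; nose prints the
# usual unittest summary (e.g. "Ran 1 test in 0.001s") and returns True when
# every selected test passes.
ok = nose.run(argv=["nosetests", "test_nose.py:test_passing", "-v"])
print("passed" if ok else "failed")

Selecting test_nose.py:test_failure the same way should report a single failure, in contrast with the two failures shown in the logs below.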

Logs from Python::Python Test Log

Logging from Python: Run All Unit Tests:

.F
======================================================================
FAIL: test_nose_tests.test_failure
----------------------------------------------------------------------
Traceback (most recent call last):
  File "c:\dev\github\d3r3kk\test\bugbash_feb_2019\.venv\lib\site-packages\nose\case.py", line 198, in runTest
    self.test(*self.arg)
  File "c:\dev\github\d3r3kk\test\bugbash_feb_2019\test_nose_tests.py", line 6, in test_failure
    assert 42 == -13
AssertionError

----------------------------------------------------------------------
Ran 2 tests in 0.005s

FAILED (failures=1)

Logging from Clicking the Run Test Code Lens of the passing test:

..
----------------------------------------------------------------------
Ran 2 tests in 0.001s

OK

Logging from Clicking the Run Test Code Lens of the failing test:

FF
======================================================================
FAIL: test_nose_tests.test_failure
----------------------------------------------------------------------
Traceback (most recent call last):
  File "c:\dev\github\d3r3kk\test\bugbash_feb_2019\.venv\lib\site-packages\nose\case.py", line 198, in runTest
    self.test(*self.arg)
  File "c:\dev\github\d3r3kk\test\bugbash_feb_2019\test_nose_tests.py", line 6, in test_failure
    assert 42 == -13
AssertionError

======================================================================
FAIL: test_nose_tests.test_failure
----------------------------------------------------------------------
Traceback (most recent call last):
  File "c:\dev\github\d3r3kk\test\bugbash_feb_2019\.venv\lib\site-packages\nose\case.py", line 198, in runTest
    self.test(*self.arg)
  File "c:\dev\github\d3r3kk\test\bugbash_feb_2019\test_nose_tests.py", line 6, in test_failure
    assert 42 == -13
AssertionError

----------------------------------------------------------------------
Ran 2 tests in 0.002s

FAILED (failures=2)

Output from Console under the Developer Tools panel

Not very interesting

console.ts:134 [Extension Host] Python Extension: Cached data exists ActivatedEnvironmentVariables, c:\dev\github\d3r3kk\test\bugbash_feb_2019
console.ts:134 [Extension Host] Python Extension: getActivatedEnvironmentVariables, Class name = b, Arg 1: <Uri:c:\dev\github\d3r3kk\test\bugbash_feb_2019>, Arg 2: undefined
console.ts:134 [Extension Host] Python Extension: Cached data exists ActivatedEnvironmentVariables, c:\dev\github\d3r3kk\test\bugbash_feb_2019
console.ts:134 [Extension Host] Python Extension: getActivatedEnvironmentVariables, Class name = b, Arg 1: <Uri:c:\dev\github\d3r3kk\test\bugbash_feb_2019>, Arg 2: undefined
console.ts:134 [Extension Host] Python Extension: Cached data exists ActivatedEnvironmentVariables, c:\dev\github\d3r3kk\test\bugbash_feb_2019
console.ts:134 [Extension Host] Python Extension: getActivatedEnvironmentVariables, Class name = b, Arg 1: <Uri:c:\dev\github\d3r3kk\test\bugbash_feb_2019>, Arg 2: undefined
@karthiknadig (Member) commented

Closing via #16371
