testmon_data.fail_reports might contain both failed and skipped #103

Closed
blueyed opened this issue Jun 21, 2018 · 9 comments
blueyed (Contributor) commented Jun 21, 2018

testmon_data.fail_reports might contain reports for both a failed and a skipped run of the same node:

(Pdb++) nodeid
'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output'
(Pdb++) pp self.testmon_data.fail_reports[nodeid]
[{'duration': 0.0020258426666259766,
  'keywords': {'TestSchemaJSRenderer': 1, 'django-rest-framework': 1, 'test_schemajs_output': 1, 'tests/test_renderers.py': 1},
  'location': ['tests/test_renderers.py', 742, 'TestSchemaJSRenderer.test_schemajs_output'],
  'longrepr': None,
  'nodeid': 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output',
  'outcome': 'passed',
  'sections': [],
  'user_properties': [],
  'when': 'setup'},
 {'duration': 0.003198385238647461,
  'keywords': {'TestSchemaJSRenderer': 1, 'django-rest-framework': 1, 'test_schemajs_output': 1, 'tests/test_renderers.py': 1},
  'location': ['tests/test_renderers.py', 742, 'TestSchemaJSRenderer.test_schemajs_output'],
  'longrepr': 'tests/test_renderers.py:753: in test_schemajs_output\n'
              '    output = renderer.render(\'data\', renderer_context={"request": request})\n'
              'rest_framework/renderers.py:862: in render\n'
              '    codec = coreapi.codecs.CoreJSONCodec()\n'
              "E   AttributeError: 'NoneType' object has no attribute 'codecs'",
  'nodeid': 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output',
  'outcome': 'failed',
  'sections': [],
  'user_properties': [],
  'when': 'call'},
 {'duration': 0.008923768997192383,
  'keywords': {'TestSchemaJSRenderer': 1, 'django-rest-framework': 1, 'test_schemajs_output': 1, 'tests/test_renderers.py': 1},
  'location': ['tests/test_renderers.py', 742, 'TestSchemaJSRenderer.test_schemajs_output'],
  'longrepr': None,
  'nodeid': 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output',
  'outcome': 'passed',
  'sections': [],
  'user_properties': [],
  'when': 'teardown'},
 {'duration': 0.0012934207916259766,
  'keywords': {'TestSchemaJSRenderer': 1, 'django-rest-framework': 1, 'skipif': 1, 'test_schemajs_output': 1, 'tests/test_renderers.py': 1},
  'location': ['tests/test_renderers.py', 743, 'TestSchemaJSRenderer.test_schemajs_output'],
  'longrepr': ['tests/test_renderers.py', 743, 'Skipped: coreapi is not installed'],
  'nodeid': 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output',
  'outcome': 'skipped',
  'sections': [],
  'user_properties': [],
  'when': 'setup'},
 {'duration': 0.026836156845092773,
  'keywords': {'TestSchemaJSRenderer': 1, 'django-rest-framework': 1, 'skipif': 1, 'test_schemajs_output': 1, 'tests/test_renderers.py': 1},
  'location': ['tests/test_renderers.py', 743, 'TestSchemaJSRenderer.test_schemajs_output'],
  'longrepr': None,
  'nodeid': 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output',
  'outcome': 'passed',
  'sections': [],
  'user_properties': [],
  'when': 'teardown'}]
(Pdb++) pp [unserialize_report('testreport', report) for report in self.testmon_data.fail_reports[nodeid]]
[<TestReport 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output' when='setup' outcome='passed'>,
 <TestReport 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output' when='call' outcome='failed'>,
 <TestReport 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output' when='teardown' outcome='passed'>,
 <TestReport 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output' when='setup' outcome='skipped'>,
 <TestReport 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output' when='teardown' outcome='passed'>]

I might have messed up some internals while debugging #101 / #102, but I think it should be ensured that this can never happen, e.g. at the DB level.
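To make the invariant concrete, here is a minimal sketch of the kind of check that could reject such an entry before it is written. `validate_fail_reports` is a hypothetical helper name, not part of testmon's actual API; it only assumes the report-dict shape visible in the dump above.

```python
# Sketch of a sanity check for a node's stored reports: each pytest phase
# ('setup', 'call', 'teardown') should appear at most once, so a single
# stored entry can never mix a failed run with a later skipped run.
# NOTE: validate_fail_reports is a hypothetical name, not testmon's API.

def validate_fail_reports(reports):
    """Return True if the report list holds at most one report per phase."""
    seen_phases = set()
    for report in reports:
        when = report["when"]
        if when in seen_phases:
            return False  # duplicate phase -> two runs were merged
        seen_phases.add(when)
    return True


# The dump above contains two 'setup' and two 'teardown' reports:
mixed = [
    {"when": "setup", "outcome": "passed"},
    {"when": "call", "outcome": "failed"},
    {"when": "teardown", "outcome": "passed"},
    {"when": "setup", "outcome": "skipped"},
    {"when": "teardown", "outcome": "passed"},
]
print(validate_fail_reports(mixed))      # False: corrupted entry
print(validate_fail_reports(mixed[:3]))  # True: a single clean run
```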

tarpas (Owner) commented Jun 23, 2018

@blueyed Could you please try version ef3f6d3? It has a check against writing corrupted data, so we can find out how this happened. I couldn't reproduce it.

tarpas (Owner) commented Jun 23, 2018

Also, 39ee38a is the next iteration.

blueyed (Contributor, Author) commented Jun 23, 2018

@tarpas It might make sense to commit what you have done so far to master already.

I am currently running off my fix-collect-ignore branch, which I have rebased (#102).

blueyed (Contributor, Author) commented Jun 25, 2018

Just hit the assert now:

collected 1418 items / 272 deselected                                                        

project/app/tests/test_api_bicycle.py F
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR>   File "…/project/.venv/lib/python3.6/site-packages/_pytest/main.py", line 178, in wrap_session                                                   
INTERNALERROR>     session.exitstatus = doit(config, session) or 0
INTERNALERROR>   File "…/project/.venv/lib/python3.6/site-packages/_pytest/main.py", line 215, in _main                                                          
INTERNALERROR>     config.hook.pytest_runtestloop(session=session)
INTERNALERROR>   File "…/project/.venv/lib/python3.6/site-packages/pluggy/__init__.py", line 617, in __call__                                                    
INTERNALERROR>     return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
INTERNALERROR>   File "…/project/.venv/lib/python3.6/site-packages/pluggy/__init__.py", line 222, in _hookexec                                                   
INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR>   File "…/project/.venv/lib/python3.6/site-packages/pluggy/__init__.py", line 216, in <lambda>                                                    
INTERNALERROR>     firstresult=hook.spec_opts.get('firstresult'),
INTERNALERROR>   File "…/project/.venv/lib/python3.6/site-packages/pluggy/callers.py", line 201, in _multicall                                                   
INTERNALERROR>     return outcome.get_result()
INTERNALERROR>   File "…/project/.venv/lib/python3.6/site-packages/pluggy/callers.py", line 76, in get_result                                                    
INTERNALERROR>     raise ex[1].with_traceback(ex[2])
INTERNALERROR>   File "…/project/.venv/lib/python3.6/site-packages/pluggy/callers.py", line 180, in _multicall                                                   
INTERNALERROR>     res = hook_impl.function(*args)
INTERNALERROR>   File "…/Vcs/pytest-testmon/testmon/pytest_testmon.py", line 211, in pytest_runtestloop                                                                           
INTERNALERROR>     self.report_if_failed(nodeid)
INTERNALERROR>   File "…/Vcs/pytest-testmon/testmon/pytest_testmon.py", line 170, in report_if_failed                                                                             
INTERNALERROR>     self.config.hook.pytest_runtest_logreport(report=test_report)
INTERNALERROR>   File "…/project/.venv/lib/python3.6/site-packages/pluggy/__init__.py", line 617, in __call__                                                    
INTERNALERROR>     return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
INTERNALERROR>   File "…/project/.venv/lib/python3.6/site-packages/pluggy/__init__.py", line 222, in _hookexec                                                   
INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR>   File "…/project/.venv/lib/python3.6/site-packages/pluggy/__init__.py", line 216, in <lambda>                                                    
INTERNALERROR>     firstresult=hook.spec_opts.get('firstresult'),
INTERNALERROR>   File "…/project/.venv/lib/python3.6/site-packages/pluggy/callers.py", line 201, in _multicall                                                   
INTERNALERROR>     return outcome.get_result()
INTERNALERROR>   File "…/project/.venv/lib/python3.6/site-packages/pluggy/callers.py", line 76, in get_result                                                    
INTERNALERROR>     raise ex[1].with_traceback(ex[2])
INTERNALERROR>   File "…/project/.venv/lib/python3.6/site-packages/pluggy/callers.py", line 180, in _multicall                                                   
INTERNALERROR>     res = hook_impl.function(*args)
INTERNALERROR>   File "…/Vcs/pytest-testmon/testmon/pytest_testmon.py", line 228, in pytest_runtest_logreport                                                                     
INTERNALERROR>     assert report.when not in [r['when'] for r in self.current_reports], \
INTERNALERROR> AssertionError: project/app/tests/test_api_alerts.py::test_alert_filtering_bicycle_state setup                                                                            
INTERNALERROR> assert 'setup' not in ['setup', 'call', 'teardown']
INTERNALERROR>  +  where 'setup' = <TestReport 'project/app/tests/test_api_alerts.py::test_alert_filtering_bicycle_state' when='setup' outcome='passed'>.when                            

Before this, I had interrupted pytest --testmon using Ctrl-C:

collected 1421 items                                                                        

…
velodrome/lock8/tests/test_api.py .F..................................F...F..         [ 11%]
velodrome/lock8/tests/test_api_alerts.py ................F..F.....                    [ 13%]
…
.......F.....F...F...........F..F.                                                    [ 19%]
velodrome/lock8/tests/test_api_bicycle_model.py .^C

Note that it already goes through testmon's pytest_runtestloop at this point - but that might be expected.

It looks like [r['when'] for r in self.current_reports] is too generic, i.e. it looks at all reports rather than only those for the node in question?!
(This happens with both master and my fix-collect-ignore branch.)
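To illustrate the suspected problem: if `current_reports` accumulates reports across tests, the duplicate-phase check has to be scoped to the incoming report's nodeid, otherwise a 'setup' report from a different test trips the assert. This is a standalone sketch with assumed data shapes and a hypothetical helper name, not testmon's actual internals.

```python
# Sketch (assumed dict shapes, hypothetical helper name): the duplicate-phase
# check should only compare reports that share the incoming report's nodeid,
# not every report accumulated so far.

def phase_already_seen(current_reports, new_report):
    """Check for a duplicate phase, scoped to the new report's nodeid."""
    return new_report["when"] in [
        r["when"] for r in current_reports
        if r["nodeid"] == new_report["nodeid"]  # scope to the same test
    ]


current = [
    {"nodeid": "test_a.py::test_one", "when": "setup"},
    {"nodeid": "test_a.py::test_one", "when": "call"},
]
incoming = {"nodeid": "test_b.py::test_two", "when": "setup"}
# An unscoped check ('setup' in all phases) would flag this as a duplicate;
# the nodeid-scoped check does not:
print(phase_already_seen(current, incoming))  # False
```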

blueyed (Contributor, Author) commented Jun 25, 2018

That was before the latest data-format bump though, so it might have missed some fixes, but I could just reproduce it now by Ctrl-C'ing with a removed .testmondata.

To reproduce this, a test needs to fail before Ctrl-C'ing.

tarpas (Owner) commented Jun 25, 2018

So it was two separate runs?

  1. one interrupted with Ctrl-C
  2. one interrupted by this assert?

blueyed (Contributor, Author) commented Jun 25, 2018

Yes.

I could reproducibly trigger the internal error (the assert) by removing .testmondata, running the tests, Ctrl-C'ing after the first failed test, and then running pytest --testmon again.
I.e. the assert happened on the new run, not during the Ctrl-C'd one.

Just tried to reproduce it with something simpler, but could not:

def test_pass1():
    pass


def test_fail():
    assert 0


def test_sleep():
    import time
    time.sleep(2)


def test_pass2():
    pass

But I've copied .testmondata from when it happened, and might be able to reproduce it in the project itself again (though I had quite some failing tests back then, so maybe not).

tarpas (Owner) commented Jun 27, 2018

It shouldn't happen in v0.9.12 anymore. Can you confirm?

blueyed (Contributor, Author) commented Jun 27, 2018

Yes, I assume so - at least I've seen something to this effect when re-testing for #101.
Let's close it for now.
Thanks!

blueyed closed this as completed Jun 27, 2018