
"corruption" of .testmondata by an external exception (eg. Database connection or wrong environment) #66

Closed
blueyed opened this issue May 30, 2017 · 8 comments

Comments

@blueyed
Contributor

blueyed commented May 30, 2017

Test node data gets saved after the test finishes (via set_dependencies in

testmon_data.set_dependencies(nodeid, testmon_data.get_nodedata(nodeid, self.cov.get_data(), rootdir), result)
), but when you then abort the test run using SIGINT (or -x triggers it), TestmonData.write_data is skipped, so the corresponding mtimes and file_checksums metadata never get saved.
(via #60 (comment))

I think set_dependencies should really only set the dependencies, and then either be skipped as well when the mtimes are not written, or the mtimes should be updated together with the dependencies.
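To illustrate the two write paths described above, here is a rough sketch (simplified pseudocode, not testmon's actual implementation): the per-test dependency rows are persisted as each test finishes, while the mtimes/file_checksums metadata is only flushed by write_data at the end of the session, so an aborted run leaves the two out of sync.

```python
# Simplified sketch of the write ordering described above; names mirror the
# report, but the bodies are placeholders rather than testmon's real code.
class TestmonDataSketch:
    def __init__(self):
        self.node_data = {}       # nodeid -> per-file coverage ("dependencies")
        self.mtimes = {}          # file -> mtime, held in memory during the run
        self.file_checksums = {}  # file -> checksum, held in memory during the run

    def set_dependencies(self, nodeid, nodedata, result):
        # Called after every finished test: the dependency rows are persisted
        # immediately ...
        self.node_data[nodeid] = nodedata
        self._persist_node(nodeid, nodedata, result)

    def write_data(self):
        # ... but the metadata is only written here, at session end, and this
        # call does not happen when the run is aborted via SIGINT or -x.
        self._persist_metadata(self.mtimes, self.file_checksums)

    def _persist_node(self, nodeid, nodedata, result):
        pass  # placeholder for the real database write

    def _persist_metadata(self, mtimes, checksums):
        pass  # placeholder for the real database write
```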

blueyed added a commit to blueyed/testmon that referenced this issue May 30, 2017
@tarpas
Owner

tarpas commented May 30, 2017

The basic idea is that mtimes and file_checksums are just helper/optimization data for large projects. They must not be written after every executed test, because:

  • it's a little hard to get right
  • it's even more complicated to avoid performance penalty

The dependencies themselves, however, should be written after every test, because some test suites take hours and we shouldn't lose all the data on interruption. So far I'm committed to fulfilling this requirement.

In other words, the current implementation is a careful tradeoff between implementation complexity and performance. It's unlikely I would want to change it before refactoring the data model (which would fix this by itself) and refactoring the test suite.

Could you rephrase your requirement in other terms? How do you experience the disadvantage now? Do you have a huge test file which you frequently run with -x and do you have slow start-up? Is there a printout which bothers you?

@blueyed
Contributor Author

blueyed commented May 30, 2017

@tarpas
The problem is that since the mtimes are not saved for a file, that file is not considered at all when checking for changed files.

The simple test case is:

  • remove .testmondata
  • stop your database server
  • run pytest with testmon
  • abort it after seeing a lot of errors
  • start the database server
  • run pytest with testmon
  • notice that it just replays the failed tests, and won't run the others

This is bad already, but it gets worse: when you explicitly change a file whose tests were recorded as failing, testmon will not pick the change up. It seems to only look at the mtimes for this, and those are empty.
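For context, here is a minimal sketch of how an mtime/checksum-based change check behaves when no mtime was recorded for a file (a hypothetical helper, not testmon's code): a file with no stored mtime has nothing to compare against, so it is effectively treated as unchanged and its failed tests are merely replayed.

```python
import hashlib
import os

def file_changed(path, recorded_mtime, recorded_checksum):
    # Hypothetical change check: when no mtime was saved (the aborted-run
    # case), there is nothing to compare against, so the file is treated
    # as unchanged even if it was edited in the meantime.
    if recorded_mtime is None:
        return False
    if os.path.getmtime(path) == recorded_mtime:
        return False
    with open(path, "rb") as f:
        current = hashlib.sha1(f.read()).hexdigest()
    return current != recorded_checksum
```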

What do you think about #67 in this regard?

blueyed added a commit to blueyed/testmon that referenced this issue May 30, 2017
@tarpas
Owner

tarpas commented May 30, 2017

Seems like a serious bug. Does the bug only occur when you interrupt the first run (with no .testmondata)?

@blueyed
Contributor Author

blueyed commented May 30, 2017

At least that is how I'm able to reproduce it easily (and #67 fixes that).
But my colleague and I have both noticed that testmon has not been reliable for a while now.
I usually just remove .testmondata then (which might trigger this particular issue again if the run is aborted).

Also, using -x is likely to trigger this, since it is a KeyboardInterrupt internally IIRC.

It might be related to b59154b after all.
The problem here is that only part of the data is saved (for each finished item), but not the additional metadata.

@tarpas
Owner

tarpas commented Jun 17, 2017

Daniel, after thinking about it I must say this is by design. When you read the text on http://testmon.org -> Thoughts -> "5. External services (reached by network)", you have to realize a database is just that.

A test which ends early with a database error is captured in .testmondata as depending only on the first couple of lines. After starting the database and re-running the test, testmon doesn't register any change, so it only re-reports the old exception and doesn't attempt to run the test again.
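As a hypothetical example of such a test (not from this project): the coverage footprint recorded in .testmondata would contain only the lines executed before the connection error, so later changes to the rest of the test's code path cannot be detected.

```python
def connect_to_db():
    # Stand-in for a real database client; raises while the server is down.
    raise ConnectionError("could not connect to database")

def test_orders():
    conn = connect_to_db()                       # the run never gets past this line
    orders = conn.query("SELECT * FROM orders")  # never executed, so never recorded
    assert len(orders) == 3                      # never executed, so never recorded
```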

We could implement a switch/command which re-executes all failures regardless of the .testmondata. That would also mitigate #57.

What UI do you recommend?

py.test --testmon --tlf?

(tlf as in "testmon last failed")
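A minimal sketch of how such a switch could be wired up as a pytest plugin hook, assuming hypothetical load_failed_nodeids() and unaffected_nodeids() helpers that read from .testmondata; the actual implementation landed later in #71 and may look quite different.

```python
def load_failed_nodeids():
    # Hypothetical helper: would read the previously failing nodeids
    # out of .testmondata.
    return set()

def unaffected_nodeids(items):
    # Hypothetical helper: would return nodeids whose recorded dependencies
    # (files, mtimes, checksums) are unchanged and can normally be skipped.
    return set()

def pytest_addoption(parser):
    parser.addoption(
        "--tlf", action="store_true",
        help="re-execute the last failures even if the underlying code didn't change",
    )

def pytest_collection_modifyitems(config, items):
    skip = unaffected_nodeids(items)
    if config.getoption("--tlf"):
        # Previously failed tests are exempt from deselection, so they run again.
        skip -= load_failed_nodeids()
    deselected = [item for item in items if item.nodeid in skip]
    if deselected:
        config.hook.pytest_deselected(items=deselected)
        items[:] = [item for item in items if item.nodeid not in skip]
```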

@tarpas tarpas changed the title from "Saves node data, but not its metadata when aborted" to ""corruption" of .testmondata by an external exception (e.g. database connection or wrong environment)" Jun 17, 2017
@blueyed
Contributor Author

blueyed commented Jun 17, 2017

--tlf sounds good.

tarpas added a commit that referenced this issue Jun 17, 2017
--testmon --tlf forces re-execution of all failures even if the underlying source code didn't change. This serves if you want different error output or if you corrupted .testmondata through an inaccessible external service or wrong virtualenv.
@tarpas
Owner

tarpas commented Jun 17, 2017

@blueyed Can you have a look at #71, please? Maybe you can test it with your code and also look at the diff and suggest improvements. :)

tarpas added a commit that referenced this issue Jun 21, 2017
Re #66, re #57
--testmon --tlf forces re-execution of all failures even if the underlying source code didn't change. This serves if you want different error output or if you corrupted .testmondata through an inaccessible external service or wrong virtualenv.

# Conflicts:
#	test/test_testmon.py
@tarpas
Owner

tarpas commented Jun 21, 2017

@blueyed I would consider this fixed for now. Maybe there could be a heuristic which wouldn't write the data if too many tests fail too quickly. Maybe there could be an option to roll back the last run, or some other UI which would let you choose whether to write or discard the .testmondata changes.
But that's out of scope for now.

@tarpas tarpas closed this as completed Jun 21, 2017