Catch and raise OutcomeException's instead of catching all exceptions #1394
Conversation
Hmm, this does break
The description doesn't match the patch. There should be documentation that this is a workaround and will eventually be killed, and/or a pytest_warning. There should be tests for this change, which does in fact change external behaviour.
I added a test case which exemplifies my abnormal use, probably even abuse, of a custom skip fixture which injects fixture values into the globals context for proper truthiness evaluation... |
Force-pushed from b160738 to a58ec32.
There are some real failures in there, can you have a look?
Yes, I noticed but still haven't had the time to get them fixed. I'll take care of them as soon as possible. |
Thanks for the update :) take your time.
@s0undt3ch just a gentle ping. 😁 |
@s0undt3ch, we plan a release about a week before EuroPython... do you think you could finish this PR by then?
Creating your own mark evaluator is not really a public API. I think an alternative, slightly better and more officially supported, way of doing this is by using a mark on the test:
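The code example that originally followed this comment did not survive the page extraction. As a hedged illustration of what "a mark on the test" can look like, here is a sketch using an invented `skip_if_backend_missing` marker evaluated from a `pytest_runtest_setup` hook; the marker name and the importability check are assumptions, not the reviewer's original snippet, and the APIs used (`get_closest_marker`, `addinivalue_line`) are from modern pytest:

```python
# conftest.py -- hypothetical sketch of a marker-driven skip; not the
# reviewer's original example. Marker name and availability check are made up.
import importlib.util

import pytest


def pytest_configure(config):
    # Register the marker so it shows up in --markers and passes --strict-markers.
    config.addinivalue_line(
        "markers",
        "skip_if_backend_missing(module): skip the test if the module cannot be imported",
    )


def pytest_runtest_setup(item):
    marker = item.get_closest_marker("skip_if_backend_missing")
    if marker is not None:
        module_name = marker.args[0]
        if importlib.util.find_spec(module_name) is None:
            pytest.skip("backend module {!r} is not installed".format(module_name))
```

A test then only needs the decorator:

```python
# test_backend.py
import pytest


@pytest.mark.skip_if_backend_missing("redis")
def test_uses_redis():
    ...
```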
At first glance it looks like this one is not going anywhere at the current point in time. @s0undt3ch, please open a new one if you find some time to fix the merge conflicts/test failures.
Sorry for being so slow on this one. Yes, I'll reopen when I have more time to work on this. |
Recently I needed to create my own custom `MarkEvaluator`, which I called `FixtureMarkEvaluator`. It looks at fixture results and decides whether to skip a test based on them.
The alternative would be to skip the test in the test function body, but by then our DB prep routines (which are extensive) would already have run, only for the test to be skipped. This way, we skip before those prep routines are even executed.
I ended up with something like:
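The code block from the original description did not survive extraction. As a rough, hedged approximation of the intent — skip early based on another fixture's value — the sketch below uses public pytest APIs (a custom marker, an autouse fixture and `request.getfixturevalue`) rather than the internal `MarkEvaluator` subclass the description refers to; all names are invented for illustration:

```python
# conftest.py -- a rough approximation of the fixture-driven skip described
# above, not the actual FixtureMarkEvaluator from this PR. Names are made up.
import pytest


def pytest_configure(config):
    config.addinivalue_line(
        "markers",
        "requires_truthy_fixture(name): skip the test unless the named fixture is truthy",
    )


@pytest.fixture(autouse=True)
def _skip_unless_fixture_is_truthy(request):
    marker = request.node.get_closest_marker("requires_truthy_fixture")
    if marker is None:
        return
    fixture_name = marker.args[0]
    # Resolve the guarding fixture first; if it is falsy, skip before the
    # expensive DB prep fixtures requested by the test are ever set up.
    if not request.getfixturevalue(fixture_name):
        pytest.skip("fixture {!r} is not available".format(fixture_name))
```

Within the same scope, autouse fixtures are requested before the test's other fixtures, so (provided the DB prep fixtures are not higher-scoped) the skip should happen before they run, which is the behaviour the description is after.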
This would just work; however, again, for our specific use case, one of the fixtures for which we use this custom marker depends on an external library, and calling `pytest.skip` in it would just make the exception get caught by this catch-all exception handler, leaving us with a not-so-helpful error message.
The approach in this PR is to catch `Skipped` and re-raise it, as opposed to just letting it get "caught" by the catch-all exception handler. (We don't catch `OutcomeException` because, while evaluating `_istrue()`, a `Failed` exception can be raised, and catching `OutcomeException` would change the way `Failed` is handled.)
Please consider this PR as is: catching and re-raising `Skipped` should not trigger any regression; in fact, the skipping code should know how to handle `Skipped` exceptions and not treat them as any other unknown kind of exception.
If a test case is mandatory, I'd have to recreate our use case in a single test (possible, but probably very specific and not that helpful as an example); still, I could try to create such a test to get this code in, since it's breaking our pytest usage experience...
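For readers unfamiliar with pytest's outcome exceptions, here is a hedged, self-contained sketch of the idea behind the patch — let `Skipped` propagate out of a catch-all handler instead of swallowing it — rather than the actual diff against `_pytest/skipping.py`; the helper name and the evaluation logic are invented:

```python
# sketch.py -- illustrates the idea behind this PR, not the actual diff:
# let Skipped escape a catch-all handler instead of being reported as an
# unexpected error while evaluating a skip condition.
import pytest
from _pytest.outcomes import Skipped  # raised by pytest.skip(); lived elsewhere in older pytest


def evaluate_condition(expression, namespace):
    """Evaluate a skipif-style condition, roughly as a mark evaluator would."""
    try:
        return bool(eval(expression, namespace))
    except Skipped:
        # pytest.skip() was called while evaluating the condition (e.g. inside
        # a helper the expression calls). Re-raise so the test is reported as
        # skipped with its reason, instead of hitting the catch-all below.
        raise
    except Exception as exc:
        # Any other failure while evaluating the condition is a real error.
        pytest.fail("error evaluating {!r}: {!r}".format(expression, exc), pytrace=False)
```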
Skippedshould not trigger any regression, if fact, the skipping code should know how to handleSkippedexceptions and not treat them as any other unknown kind of exception.If a test case is mandatory, I'd have to recreate our use case in a single test(possible, but probably very specific and not that much helpful as an example), still, I could try to create such test to get this code in since it's breaking our pytest usage experience...