Move @fails annotation into main test decorators #24

Closed
DRMacIver opened this issue Jan 13, 2015 · 5 comments

DRMacIver (Member) commented Jan 13, 2015

I was reading "Software Testing with QuickCheck" by John Hughes, and in it he points out that keeping failing properties around, marked as expected failures, is actually useful and something people should do as part of their test suites. This seems like an eminently fair point. I already have something like this in the tests for my test decorators. It currently depends on py.test but could easily be made not to. Do this and merge it into the main API.
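A runner-independent marker along these lines might look like the following sketch. It is purely illustrative (the decorator name and semantics here are assumptions, not Hypothesis's actual internals): it inverts the test, passing only when the expected failure occurs.

```python
# Hypothetical sketch of a runner-independent @fails marker, assuming the
# simplest possible semantics: the decorated test passes iff it raises
# AssertionError.
from functools import wraps

def fails(test):
    """Mark a test that is expected to fail its assertions."""
    @wraps(test)
    def inverted(*args, **kwargs):
        try:
            test(*args, **kwargs)
        except AssertionError:
            return  # the expected failure happened; the marked test passes
        raise AssertionError(
            "Expected %s to fail, but it passed" % test.__name__
        )
    return inverted
```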

@DRMacIver DRMacIver added this to the 0.4 milestone Jan 13, 2015
@DRMacIver DRMacIver added the enhancement it's not broken, but we want it to be better label Jan 18, 2015
Zac-HD (Member) commented May 11, 2017

See `tests.common.utils.fails_with`; I'm also adding a similar decorator for "this test is for a deprecated thing" as part of #599.

@Zac-HD Zac-HD closed this as completed May 11, 2017
DRMacIver (Member, Author) commented

I think this issue was actually about making `fails` part of the public API, but I don't think that was ever a good idea; people should really be using pytest's `xfail` or an equivalent for this (maybe we should too).

Zac-HD (Member) commented May 11, 2017

Ah, right. I agree that it's a separate concern from the rest of the Hypothesis API, and users should look to their test runner of choice. It's probably still useful for us to have a runner-agnostic `fails_with` internally, though.

DRMacIver (Member, Author) commented

Yup. It's perfectly sensible to have extra decorators like this internally and/or in the test helpers, just not part of the public API. Past-@DRMacIver had some odd ideas about API design which I've had to unlearn the hard way. :-)

Zac-HD (Member) commented Jan 31, 2018

> people should be using pytest's xfail or equivalent for it really (maybe we should too)

For posterity: `@pytest.mark.xfail(raises=FooError)` is semantically different from `@fails_with(FooError)`. The former indicates that raising FooError is expected behaviour due to an unfixed bug (and the test is reported as xfailed rather than passed), while the latter indicates that the test must raise FooError in order to pass.

Our internal decorator is instead more like a decorator form of `with pytest.raises(FooError): ...`.
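Concretely, the distinction might be sketched like this (the `fails_with` below is a hypothetical reimplementation for illustration, not the actual helper in `tests.common.utils`):

```python
# Hypothetical reimplementation of fails_with, for illustration only:
# a decorator form of `with pytest.raises(...)` -- the wrapped test
# passes iff it raises the named exception.
from functools import wraps

import pytest

def fails_with(exc_type):
    def decorate(test):
        @wraps(test)
        def wrapped(*args, **kwargs):
            with pytest.raises(exc_type):
                test(*args, **kwargs)
        return wrapped
    return decorate

# xfail: raising ValueError is a known, unfixed bug;
# the test is reported as "xfailed", not "passed".
@pytest.mark.xfail(raises=ValueError)
def test_known_bug():
    raise ValueError("unfixed bug")

# fails_with: raising ValueError is the *correct* behaviour;
# the test is reported as "passed".
@fails_with(ValueError)
def test_must_raise():
    raise ValueError("expected and required")
```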

Macavirus pushed a commit to Macavirus/hypothesis that referenced this issue May 2, 2022